Jan 05 21:34:04 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 05 21:34:04 crc restorecon[4689]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 05 21:34:04 crc restorecon[4689]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc 
restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc 
restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 
21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc 
restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 21:34:04 crc 
restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 
21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:34:04 crc 
restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc 
restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 05 21:34:04 crc restorecon[4689]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc 
restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 
crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc 
restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc 
restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:34:04 crc restorecon[4689]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 05 21:34:04 crc restorecon[4689]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 05 21:34:05 crc kubenswrapper[5000]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 05 21:34:05 crc kubenswrapper[5000]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 05 21:34:05 crc kubenswrapper[5000]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 05 21:34:05 crc kubenswrapper[5000]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 05 21:34:05 crc kubenswrapper[5000]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 05 21:34:05 crc kubenswrapper[5000]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.166795 5000 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170329 5000 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170350 5000 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170355 5000 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170359 5000 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170363 5000 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170367 5000 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170371 5000 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170375 5000 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170379 5000 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170390 5000 
feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170394 5000 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170399 5000 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170404 5000 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170408 5000 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170413 5000 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170418 5000 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170421 5000 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170425 5000 feature_gate.go:330] unrecognized feature gate: Example Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170429 5000 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170433 5000 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170437 5000 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170441 5000 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170444 5000 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170447 5000 
feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170451 5000 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170455 5000 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170458 5000 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170462 5000 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170465 5000 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170469 5000 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170472 5000 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170476 5000 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170479 5000 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170482 5000 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170486 5000 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170490 5000 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170493 5000 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170497 5000 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 05 21:34:05 crc kubenswrapper[5000]: 
W0105 21:34:05.170500 5000 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170504 5000 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170507 5000 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170511 5000 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170516 5000 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170520 5000 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170524 5000 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170527 5000 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170531 5000 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170534 5000 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170538 5000 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170542 5000 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170545 5000 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170550 5000 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 
21:34:05.170554 5000 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170558 5000 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170562 5000 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170566 5000 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170570 5000 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170574 5000 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170578 5000 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170582 5000 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170586 5000 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170590 5000 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170594 5000 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170598 5000 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170601 5000 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170604 5000 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170608 5000 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170611 5000 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170615 5000 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170618 5000 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.170621 5000 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170828 5000 flags.go:64] FLAG: --address="0.0.0.0" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170840 5000 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170848 5000 flags.go:64] FLAG: --anonymous-auth="true" Jan 05 21:34:05 crc 
kubenswrapper[5000]: I0105 21:34:05.170855 5000 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170860 5000 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170865 5000 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170871 5000 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170876 5000 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170881 5000 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170886 5000 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170907 5000 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170911 5000 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170953 5000 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170958 5000 flags.go:64] FLAG: --cgroup-root="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170962 5000 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170966 5000 flags.go:64] FLAG: --client-ca-file="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170970 5000 flags.go:64] FLAG: --cloud-config="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170974 5000 flags.go:64] FLAG: --cloud-provider="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170978 5000 flags.go:64] FLAG: --cluster-dns="[]" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170983 5000 
flags.go:64] FLAG: --cluster-domain="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170987 5000 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170992 5000 flags.go:64] FLAG: --config-dir="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.170996 5000 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171000 5000 flags.go:64] FLAG: --container-log-max-files="5" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171006 5000 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171010 5000 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171014 5000 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171019 5000 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171023 5000 flags.go:64] FLAG: --contention-profiling="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171027 5000 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171031 5000 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171035 5000 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171039 5000 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171044 5000 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171049 5000 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171053 5000 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 
05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171057 5000 flags.go:64] FLAG: --enable-load-reader="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171061 5000 flags.go:64] FLAG: --enable-server="true" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171065 5000 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171070 5000 flags.go:64] FLAG: --event-burst="100" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171074 5000 flags.go:64] FLAG: --event-qps="50" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171078 5000 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171083 5000 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171087 5000 flags.go:64] FLAG: --eviction-hard="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171092 5000 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171096 5000 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171100 5000 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171105 5000 flags.go:64] FLAG: --eviction-soft="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171109 5000 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171113 5000 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171117 5000 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171123 5000 flags.go:64] FLAG: --experimental-mounter-path="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171127 5000 flags.go:64] FLAG: --fail-cgroupv1="false" 
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171131 5000 flags.go:64] FLAG: --fail-swap-on="true" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171135 5000 flags.go:64] FLAG: --feature-gates="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171140 5000 flags.go:64] FLAG: --file-check-frequency="20s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171144 5000 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171148 5000 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171152 5000 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171157 5000 flags.go:64] FLAG: --healthz-port="10248" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171161 5000 flags.go:64] FLAG: --help="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171165 5000 flags.go:64] FLAG: --hostname-override="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171170 5000 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171174 5000 flags.go:64] FLAG: --http-check-frequency="20s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171178 5000 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171182 5000 flags.go:64] FLAG: --image-credential-provider-config="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171187 5000 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171203 5000 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171208 5000 flags.go:64] FLAG: --image-service-endpoint="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171212 5000 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 05 21:34:05 crc 
kubenswrapper[5000]: I0105 21:34:05.171216 5000 flags.go:64] FLAG: --kube-api-burst="100" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171220 5000 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171224 5000 flags.go:64] FLAG: --kube-api-qps="50" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171228 5000 flags.go:64] FLAG: --kube-reserved="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171233 5000 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171236 5000 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171240 5000 flags.go:64] FLAG: --kubelet-cgroups="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171244 5000 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171248 5000 flags.go:64] FLAG: --lock-file="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171252 5000 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171257 5000 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171261 5000 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171267 5000 flags.go:64] FLAG: --log-json-split-stream="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171271 5000 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171275 5000 flags.go:64] FLAG: --log-text-split-stream="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171279 5000 flags.go:64] FLAG: --logging-format="text" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171283 5000 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" 
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171287 5000 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171291 5000 flags.go:64] FLAG: --manifest-url="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171295 5000 flags.go:64] FLAG: --manifest-url-header="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171301 5000 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171305 5000 flags.go:64] FLAG: --max-open-files="1000000" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171311 5000 flags.go:64] FLAG: --max-pods="110" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171316 5000 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171320 5000 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171325 5000 flags.go:64] FLAG: --memory-manager-policy="None" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171330 5000 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171335 5000 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171339 5000 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171344 5000 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171353 5000 flags.go:64] FLAG: --node-status-max-images="50" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171357 5000 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171361 5000 flags.go:64] FLAG: --oom-score-adj="-999" Jan 05 21:34:05 crc 
kubenswrapper[5000]: I0105 21:34:05.171365 5000 flags.go:64] FLAG: --pod-cidr="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171369 5000 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171375 5000 flags.go:64] FLAG: --pod-manifest-path="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171379 5000 flags.go:64] FLAG: --pod-max-pids="-1" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171383 5000 flags.go:64] FLAG: --pods-per-core="0" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171387 5000 flags.go:64] FLAG: --port="10250" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171391 5000 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171395 5000 flags.go:64] FLAG: --provider-id="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171399 5000 flags.go:64] FLAG: --qos-reserved="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171403 5000 flags.go:64] FLAG: --read-only-port="10255" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171407 5000 flags.go:64] FLAG: --register-node="true" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171411 5000 flags.go:64] FLAG: --register-schedulable="true" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171416 5000 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171423 5000 flags.go:64] FLAG: --registry-burst="10" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171427 5000 flags.go:64] FLAG: --registry-qps="5" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171431 5000 flags.go:64] FLAG: --reserved-cpus="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171435 5000 flags.go:64] FLAG: --reserved-memory="" Jan 
05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171441 5000 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171445 5000 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171449 5000 flags.go:64] FLAG: --rotate-certificates="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171454 5000 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171458 5000 flags.go:64] FLAG: --runonce="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171462 5000 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171466 5000 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171470 5000 flags.go:64] FLAG: --seccomp-default="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171474 5000 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171478 5000 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171482 5000 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171486 5000 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171491 5000 flags.go:64] FLAG: --storage-driver-password="root" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171495 5000 flags.go:64] FLAG: --storage-driver-secure="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171499 5000 flags.go:64] FLAG: --storage-driver-table="stats" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171503 5000 flags.go:64] FLAG: --storage-driver-user="root" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171507 5000 flags.go:64] FLAG: 
--streaming-connection-idle-timeout="4h0m0s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171511 5000 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171515 5000 flags.go:64] FLAG: --system-cgroups="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171519 5000 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171525 5000 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171529 5000 flags.go:64] FLAG: --tls-cert-file="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171533 5000 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171538 5000 flags.go:64] FLAG: --tls-min-version="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171542 5000 flags.go:64] FLAG: --tls-private-key-file="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171546 5000 flags.go:64] FLAG: --topology-manager-policy="none" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171550 5000 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171554 5000 flags.go:64] FLAG: --topology-manager-scope="container" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171558 5000 flags.go:64] FLAG: --v="2" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171563 5000 flags.go:64] FLAG: --version="false" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171569 5000 flags.go:64] FLAG: --vmodule="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171574 5000 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.171579 5000 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171667 5000 feature_gate.go:330] 
unrecognized feature gate: VSphereMultiVCenters Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171673 5000 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171677 5000 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171681 5000 feature_gate.go:330] unrecognized feature gate: Example Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171686 5000 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171690 5000 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171694 5000 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171698 5000 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171701 5000 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171705 5000 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171708 5000 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171712 5000 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171716 5000 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171719 5000 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171723 5000 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 05 21:34:05 crc 
kubenswrapper[5000]: W0105 21:34:05.171726 5000 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171729 5000 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171733 5000 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171736 5000 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171740 5000 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171743 5000 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171747 5000 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171750 5000 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171753 5000 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171757 5000 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171762 5000 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171765 5000 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171769 5000 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171772 5000 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171775 5000 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDC Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171779 5000 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171782 5000 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171786 5000 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171792 5000 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171798 5000 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171804 5000 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171809 5000 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171814 5000 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171818 5000 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171823 5000 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171827 5000 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171832 5000 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171836 5000 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171842 5000 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171848 5000 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171853 5000 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171857 5000 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171861 5000 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171866 5000 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171870 5000 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171874 5000 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171878 5000 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171881 5000 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171885 5000 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171963 5000 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171969 5000 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171974 5000 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171977 5000 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171981 5000 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171985 5000 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171988 5000 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171992 5000 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171995 5000 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.171999 5000 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.172002 5000 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.172006 5000 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.172010 5000 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.172014 5000 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.172017 5000 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.172021 5000 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.172024 5000 feature_gate.go:330] 
unrecognized feature gate: ChunkSizeMiB Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.172037 5000 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.183607 5000 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.183671 5000 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183796 5000 feature_gate.go:330] unrecognized feature gate: Example Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183805 5000 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183810 5000 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183814 5000 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183820 5000 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183826 5000 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183830 5000 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183834 5000 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183838 5000 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183842 5000 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183846 5000 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183850 5000 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183855 5000 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183859 5000 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183863 5000 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183867 5000 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183870 5000 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183874 5000 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183877 5000 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183880 5000 feature_gate.go:330] 
unrecognized feature gate: NodeDisruptionPolicy Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183884 5000 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183902 5000 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183906 5000 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183910 5000 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183914 5000 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183917 5000 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183954 5000 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183958 5000 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183961 5000 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183965 5000 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183970 5000 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183979 5000 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183983 5000 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183990 5000 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183995 5000 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.183999 5000 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184003 5000 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184007 5000 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184011 5000 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184015 5000 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184019 5000 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184023 5000 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184027 5000 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184031 5000 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184036 5000 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184040 5000 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184045 5000 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184048 5000 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184052 5000 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184056 5000 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184060 5000 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184064 5000 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184069 5000 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184073 5000 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184076 5000 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184080 5000 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184084 5000 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184087 5000 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184092 5000 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 05 21:34:05 crc 
kubenswrapper[5000]: W0105 21:34:05.184095 5000 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184100 5000 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184103 5000 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184107 5000 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184111 5000 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184115 5000 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184119 5000 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184123 5000 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184127 5000 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184135 5000 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184143 5000 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184154 5000 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.184163 5000 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184300 5000 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184308 5000 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184313 5000 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184318 5000 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184328 5000 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184332 5000 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184335 5000 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184339 5000 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184343 5000 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184346 5000 
feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184350 5000 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184354 5000 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184357 5000 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184361 5000 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184365 5000 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184368 5000 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184372 5000 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184375 5000 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184379 5000 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184383 5000 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184386 5000 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184390 5000 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184393 5000 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184397 5000 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 05 
21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184401 5000 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184405 5000 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184412 5000 feature_gate.go:330] unrecognized feature gate: Example Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184418 5000 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184423 5000 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184427 5000 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184431 5000 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184435 5000 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184441 5000 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184445 5000 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184449 5000 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184452 5000 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184457 5000 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184462 5000 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184466 5000 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184470 5000 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184474 5000 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184477 5000 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184481 5000 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184484 5000 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184488 5000 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184491 5000 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184495 5000 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184498 5000 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184502 5000 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184505 5000 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184509 5000 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184512 5000 
feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184515 5000 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184519 5000 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184522 5000 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184526 5000 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184530 5000 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184534 5000 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184538 5000 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184543 5000 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184547 5000 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184551 5000 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184555 5000 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184559 5000 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184563 5000 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184566 5000 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 05 21:34:05 crc 
kubenswrapper[5000]: W0105 21:34:05.184571 5000 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184575 5000 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184579 5000 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184583 5000 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.184587 5000 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.184594 5000 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.184981 5000 server.go:940] "Client rotation is on, will bootstrap in background" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.188228 5000 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.188380 5000 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.189213 5000 server.go:997] "Starting client certificate rotation" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.189274 5000 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.189510 5000 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-17 19:18:30.241652391 +0000 UTC Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.189616 5000 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 285h44m25.052040137s for next certificate rotation Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.196588 5000 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.199506 5000 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.209810 5000 log.go:25] "Validated CRI v1 runtime API" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.233252 5000 log.go:25] "Validated CRI v1 image API" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.235283 5000 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.237451 5000 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-05-21-30-06-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.237486 5000 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.258850 5000 manager.go:217] Machine: {Timestamp:2026-01-05 21:34:05.257141015 +0000 UTC m=+0.213343534 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:57cd32f3-2b5a-4a0d-8652-c015d388936a BootID:fe814346-f2cb-4c2c-b34c-7aac41ab93c7 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:17:6a:8f Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:17:6a:8f Speed:-1 
Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b5:67:c6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b8:cd:20 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:33:fb:e8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:17:fa:c2 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:66:d3:45:5a:79:f0 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:8a:dd:b9:44:fa:d5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 
Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.259278 5000 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.259629 5000 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.260331 5000 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.260514 5000 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.260556 5000 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.260801 5000 topology_manager.go:138] "Creating topology manager with none policy"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.260813 5000 container_manager_linux.go:303] "Creating device plugin manager"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.260940 5000 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.260968 5000 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.261330 5000 state_mem.go:36] "Initialized new in-memory state store"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.261683 5000 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.262570 5000 kubelet.go:418] "Attempting to sync node with API server"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.262593 5000 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.262651 5000 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.262667 5000 kubelet.go:324] "Adding apiserver pod source"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.262681 5000 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.264450 5000 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.264764 5000 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.265567 5000 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.265664 5000 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.265639 5000 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.265731 5000 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.265986 5000 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.266611 5000 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.266667 5000 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.266676 5000 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.266682 5000 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.266696 5000 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.266703 5000 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.266710 5000 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.266722 5000 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.266733 5000 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.266742 5000 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.266753 5000 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.266760 5000 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.266938 5000 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.267611 5000 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.268023 5000 server.go:1280] "Started kubelet"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.268581 5000 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.268764 5000 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.269587 5000 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 05 21:34:05 crc systemd[1]: Started Kubernetes Kubelet.
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.269933 5000 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.270339 5000 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.270946 5000 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.270977 5000 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.270359 5000 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:45:20.823577338 +0000 UTC
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.271219 5000 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.271506 5000 factory.go:55] Registering systemd factory
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.271535 5000 factory.go:221] Registration of the systemd container factory successfully
Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.270979 5000 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1887f34265ccd167 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-05 21:34:05.267521895 +0000 UTC m=+0.223724364,LastTimestamp:2026-01-05 21:34:05.267521895 +0000 UTC m=+0.223724364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.271655 5000 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.271847 5000 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.271957 5000 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.272100 5000 factory.go:153] Registering CRI-O factory
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.272117 5000 factory.go:221] Registration of the crio container factory successfully
Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.272425 5000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="200ms"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.272971 5000 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.273015 5000 factory.go:103] Registering Raw factory
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.273030 5000 manager.go:1196] Started watching for new ooms in manager
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.273603 5000 manager.go:319] Starting recovery of all containers
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.279880 5000 server.go:460] "Adding debug handlers to kubelet server"
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.286841 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.286918 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.286933 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.286943 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.286969 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.286980 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287004 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287013 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287028 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287036 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287045 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287057 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287066 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287077 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287103 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287113 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287121 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287129 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287138 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287146 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287155 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287177 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287193 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287202 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287212 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287237 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287252 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287267 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287276 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287287 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287296 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287305 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287318 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287328 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287337 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287350 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287382 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287393 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287405 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287416 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287427 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287438 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287449 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287459 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287470 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287481 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287490 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287501 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287510 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287521 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287531 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287541 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287570 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287584 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287595 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287606 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287616 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287627 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287636 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287646 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287656 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287667 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287676 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287685 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287693 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287703 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287714 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287723 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287732 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287742 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287751 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287760 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287768 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287777 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287785 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287794 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287805 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287815 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287825 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287835 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287844 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287855 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287881 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 
21:34:05.287905 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287915 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287926 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287939 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287952 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287962 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287972 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.287998 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288009 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288019 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288030 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288041 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288051 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288061 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288073 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288108 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288119 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288129 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288142 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288152 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288163 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288178 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288189 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288207 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288218 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288229 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288243 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288255 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288266 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288275 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288285 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288296 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288306 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288317 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288327 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288339 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288350 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288361 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288375 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288387 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288399 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288410 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288424 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" 
seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288436 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288448 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288459 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288470 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288482 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288493 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 
21:34:05.288505 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288517 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288527 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288538 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288548 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288557 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288568 5000 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288577 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288588 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288602 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288614 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288626 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288638 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288646 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.288655 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289292 5000 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289332 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289343 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289353 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289368 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289378 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289388 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289397 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289408 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289418 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289430 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289440 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289450 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289460 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289470 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289480 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289490 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289499 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289510 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289522 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289533 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289543 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289556 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289567 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289578 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289588 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289604 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289614 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" 
seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289625 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289637 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289648 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289667 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289678 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289690 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289702 5000 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289713 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289725 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289741 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289752 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289763 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289774 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289785 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289795 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289807 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289818 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289830 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289846 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289856 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289867 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289878 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289907 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289918 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289931 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" 
seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289940 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289951 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289960 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289971 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289982 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.289992 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 
21:34:05.290001 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.290010 5000 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.290019 5000 reconstruct.go:97] "Volume reconstruction finished" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.290026 5000 reconciler.go:26] "Reconciler: start to sync state" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.303116 5000 manager.go:324] Recovery completed Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.312545 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.316070 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.316147 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.316160 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.317673 5000 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.317698 5000 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.317723 5000 state_mem.go:36] "Initialized new in-memory state store" Jan 05 
21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.319907 5000 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.322433 5000 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.322492 5000 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.322520 5000 kubelet.go:2335] "Starting kubelet main sync loop" Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.322576 5000 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.346307 5000 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.346462 5000 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.353646 5000 policy_none.go:49] "None policy: Start" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.354595 5000 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.354623 5000 state_mem.go:35] "Initializing new in-memory state store" Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.371767 5000 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.404341 5000 manager.go:334] "Starting Device Plugin manager" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.404420 5000 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.404442 5000 server.go:79] "Starting device plugin registration server" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.404945 5000 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.404966 5000 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.405348 5000 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.405438 5000 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.405449 5000 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.412236 5000 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.423525 5000 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.423701 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 
21:34:05.424971 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.425009 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.425020 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.425172 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.425564 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.425623 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.426040 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.426191 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.426296 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.426649 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.426659 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.426842 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.426855 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.426813 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.427170 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.428097 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.428126 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.428140 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.428374 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.428397 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.428410 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.428538 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.429025 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.429063 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.429816 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.429840 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.429849 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.429952 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.430317 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.430366 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.430692 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.430712 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.430721 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.430781 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.430861 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.430923 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.431304 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.431382 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.431680 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.431702 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.431711 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.434150 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.434175 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.434186 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.474273 5000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="400ms" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.491828 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc 
kubenswrapper[5000]: I0105 21:34:05.491870 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.491917 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.491934 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.492007 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.492109 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.492177 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.492235 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.492291 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.492323 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.492351 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.492425 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.492454 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.492475 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.492511 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.505972 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.506974 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.507007 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.507018 5000 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.507056 5000 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.507520 5000 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594113 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594178 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594205 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594228 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 
21:34:05.594248 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594269 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594288 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594307 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594330 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594337 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594348 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594372 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594354 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594363 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594432 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 21:34:05 crc 
kubenswrapper[5000]: I0105 21:34:05.594403 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594427 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594463 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594396 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594478 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594516 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594598 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594624 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594635 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594655 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594694 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 
21:34:05.594731 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594741 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594751 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.594827 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.708625 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.710545 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.710663 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.710678 
5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.711132 5000 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.712208 5000 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.748556 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.771490 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.772447 5000 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1887f34265ccd167 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-05 21:34:05.267521895 +0000 UTC m=+0.223724364,LastTimestamp:2026-01-05 21:34:05.267521895 +0000 UTC m=+0.223724364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.776925 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.782319 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.783060 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-669f8a7350178fff256d47ffe91aab538fea2a7d3a43c0a04b52620ad39d39c2 WatchSource:0}: Error finding container 669f8a7350178fff256d47ffe91aab538fea2a7d3a43c0a04b52620ad39d39c2: Status 404 returned error can't find the container with id 669f8a7350178fff256d47ffe91aab538fea2a7d3a43c0a04b52620ad39d39c2 Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.789427 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6b8917c318390aec43ae74585514cfb9ac9f2ef32d3efb8a428bca5e46b54fc0 WatchSource:0}: Error finding container 6b8917c318390aec43ae74585514cfb9ac9f2ef32d3efb8a428bca5e46b54fc0: Status 404 returned error can't find the container with id 6b8917c318390aec43ae74585514cfb9ac9f2ef32d3efb8a428bca5e46b54fc0 Jan 05 21:34:05 crc kubenswrapper[5000]: I0105 21:34:05.803272 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.805161 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-32682638fc1f7fe401bba574ea8eead1c689cfbc8ae33be58c02d8a06aafee13 WatchSource:0}: Error finding container 32682638fc1f7fe401bba574ea8eead1c689cfbc8ae33be58c02d8a06aafee13: Status 404 returned error can't find the container with id 32682638fc1f7fe401bba574ea8eead1c689cfbc8ae33be58c02d8a06aafee13 Jan 05 21:34:05 crc kubenswrapper[5000]: W0105 21:34:05.828549 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-30bc29a1c65883a2763b810dc6a65255636b141622817ea003917c935d86269d WatchSource:0}: Error finding container 30bc29a1c65883a2763b810dc6a65255636b141622817ea003917c935d86269d: Status 404 returned error can't find the container with id 30bc29a1c65883a2763b810dc6a65255636b141622817ea003917c935d86269d Jan 05 21:34:05 crc kubenswrapper[5000]: E0105 21:34:05.875218 5000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="800ms" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.112794 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.114071 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.114108 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.114117 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.114139 5000 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 21:34:06 crc kubenswrapper[5000]: E0105 21:34:06.114531 5000 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Jan 05 21:34:06 crc kubenswrapper[5000]: W0105 21:34:06.207086 5000 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:34:06 crc kubenswrapper[5000]: E0105 21:34:06.207167 5000 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.269032 5000 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.271399 5000 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 17:20:37.840168031 +0000 UTC Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.271441 5000 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Waiting 235h46m31.568729751s for next certificate rotation Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.327419 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117"} Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.327547 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"32682638fc1f7fe401bba574ea8eead1c689cfbc8ae33be58c02d8a06aafee13"} Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.328706 5000 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65" exitCode=0 Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.328737 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65"} Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.328768 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e86b901d9586fcca09a26a86d5e84299c9369cf692f33b013899910da2a60089"} Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.328862 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.329861 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 
21:34:06.329882 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.329908 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.330453 5000 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8637314c825e0899349897c5865337e8c9c3c2315e278b1d56892ad00c188dcb" exitCode=0 Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.330513 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8637314c825e0899349897c5865337e8c9c3c2315e278b1d56892ad00c188dcb"} Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.330531 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b8917c318390aec43ae74585514cfb9ac9f2ef32d3efb8a428bca5e46b54fc0"} Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.330607 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.331276 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.331330 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.331352 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.331580 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:06 
crc kubenswrapper[5000]: I0105 21:34:06.332234 5000 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4817234685b60dae30445b9db4febda5426106f6c76a105de23b0c73b791a493" exitCode=0 Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.332292 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4817234685b60dae30445b9db4febda5426106f6c76a105de23b0c73b791a493"} Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.332326 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"669f8a7350178fff256d47ffe91aab538fea2a7d3a43c0a04b52620ad39d39c2"} Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.332694 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.332734 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.332764 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.332854 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.333951 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.333985 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.333997 5000 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.334180 5000 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3" exitCode=0 Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.334218 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3"} Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.334236 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"30bc29a1c65883a2763b810dc6a65255636b141622817ea003917c935d86269d"} Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.334348 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.335219 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.335245 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.335255 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:06 crc kubenswrapper[5000]: W0105 21:34:06.346402 5000 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: 
connection refused Jan 05 21:34:06 crc kubenswrapper[5000]: E0105 21:34:06.346582 5000 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Jan 05 21:34:06 crc kubenswrapper[5000]: W0105 21:34:06.444732 5000 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:34:06 crc kubenswrapper[5000]: E0105 21:34:06.444832 5000 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Jan 05 21:34:06 crc kubenswrapper[5000]: W0105 21:34:06.592073 5000 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:34:06 crc kubenswrapper[5000]: E0105 21:34:06.592156 5000 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Jan 05 21:34:06 crc kubenswrapper[5000]: E0105 21:34:06.675863 5000 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="1.6s" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.915521 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.917203 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.917244 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.917254 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:06 crc kubenswrapper[5000]: I0105 21:34:06.917286 5000 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 21:34:06 crc kubenswrapper[5000]: E0105 21:34:06.917657 5000 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.269022 5000 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.339803 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738"} Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.339860 5000 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6"} Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.339875 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.339879 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f"} Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.342179 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.342204 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.342213 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.354648 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6"} Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.354701 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c"} Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.354715 5000 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477"} Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.354726 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397"} Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.355828 5000 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5999ba2bf49205a73b44455e861d5a5e96032458f8e09955b6375f88d172e3e3" exitCode=0 Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.355880 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5999ba2bf49205a73b44455e861d5a5e96032458f8e09955b6375f88d172e3e3"} Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.356008 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.356990 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.357023 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.357035 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.358658 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"aab41b2ac39cca402e109d4baa3a5c0f0190fbe5da673dc3c553960bf4f48711"} Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.359074 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.359752 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.359774 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.359790 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.362071 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bcf28c82ea7e5c99e63e2a89c0703830ff0aecc1132e28157e8986e6a6b4bc20"} Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.362092 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fb2d7aa2b6fe302f377189d5a76bc6b5b2b78ad2c2f9d89952f720f02292aff7"} Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.362102 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"020f8df6592a02a08387a0fe9f50a9a54d9c0e661aab8f921e5a39bffb183928"} Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.362352 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:07 crc 
kubenswrapper[5000]: I0105 21:34:07.362907 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.363248 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:07 crc kubenswrapper[5000]: I0105 21:34:07.363361 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.294300 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.368639 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e"} Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.368708 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.369718 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.369813 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.369877 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.371671 5000 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="08c4367c06a20c5cdcc1fd3e6eba3b312450706b1db80bc7b08e262833ead6ea" exitCode=0 Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 
21:34:08.371705 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"08c4367c06a20c5cdcc1fd3e6eba3b312450706b1db80bc7b08e262833ead6ea"} Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.371912 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.371928 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.372077 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.372970 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.373003 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.372977 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.373068 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.373085 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.373014 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.373294 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 
21:34:08.373387 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.373471 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.518205 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.519118 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.519147 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.519158 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:08 crc kubenswrapper[5000]: I0105 21:34:08.519182 5000 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.378787 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9e9508ad086f704d530c6e7fb9ee6682f8379265de50780c449e047a058feb8b"} Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.378841 5000 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.378850 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3f4fdcd0c1f6423c1009f863f5d5375277ee55258c9e0be8902dca75105b1523"} Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.378872 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"05cf9563061f7808aa7bed0515193ee70c0849b96238964d59ca4b6e3ce2ebcf"} Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.378883 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.378917 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"79b87a0de398e3ce4a03f0da1dda4b0285488f1600861b9ce04f2aac4df13006"} Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.378934 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"50e28812dc979badf172fcc58d5dcdcbe0827369904affc00d198504802ab1ca"} Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.379079 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.380416 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.380443 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.380451 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.381087 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.381111 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.381119 5000 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.439257 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.439391 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.440404 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.440435 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.440446 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:09 crc kubenswrapper[5000]: I0105 21:34:09.506199 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 05 21:34:10 crc kubenswrapper[5000]: I0105 21:34:10.381047 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:10 crc kubenswrapper[5000]: I0105 21:34:10.381960 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:10 crc kubenswrapper[5000]: I0105 21:34:10.382003 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:10 crc kubenswrapper[5000]: I0105 21:34:10.382016 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:10 crc kubenswrapper[5000]: I0105 21:34:10.535325 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:10 crc kubenswrapper[5000]: I0105 21:34:10.535525 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:10 crc kubenswrapper[5000]: I0105 21:34:10.536964 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:10 crc kubenswrapper[5000]: I0105 21:34:10.537000 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:10 crc kubenswrapper[5000]: I0105 21:34:10.537016 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:10 crc kubenswrapper[5000]: I0105 21:34:10.606055 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 05 21:34:11 crc kubenswrapper[5000]: I0105 21:34:11.383340 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:11 crc kubenswrapper[5000]: I0105 21:34:11.384564 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:11 crc kubenswrapper[5000]: I0105 21:34:11.384619 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:11 crc kubenswrapper[5000]: I0105 21:34:11.384661 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.108477 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.108738 5000 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.108803 5000 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.110538 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.110581 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.110593 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.296845 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.359999 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.360237 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.362069 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.362119 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.362128 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.385538 5000 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.385553 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 
05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.385589 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.386689 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.386721 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.386730 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.387093 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.387150 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:12 crc kubenswrapper[5000]: I0105 21:34:12.387163 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:13 crc kubenswrapper[5000]: I0105 21:34:13.379083 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:13 crc kubenswrapper[5000]: I0105 21:34:13.379310 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:13 crc kubenswrapper[5000]: I0105 21:34:13.381107 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:13 crc kubenswrapper[5000]: I0105 21:34:13.381187 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:13 crc kubenswrapper[5000]: I0105 21:34:13.381213 5000 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:13 crc kubenswrapper[5000]: I0105 21:34:13.384455 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:13 crc kubenswrapper[5000]: I0105 21:34:13.388694 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:13 crc kubenswrapper[5000]: I0105 21:34:13.390009 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:13 crc kubenswrapper[5000]: I0105 21:34:13.390035 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:13 crc kubenswrapper[5000]: I0105 21:34:13.390044 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:15 crc kubenswrapper[5000]: I0105 21:34:15.287829 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:15 crc kubenswrapper[5000]: I0105 21:34:15.288238 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:15 crc kubenswrapper[5000]: I0105 21:34:15.289763 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:15 crc kubenswrapper[5000]: I0105 21:34:15.289819 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:15 crc kubenswrapper[5000]: I0105 21:34:15.289843 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:15 crc kubenswrapper[5000]: I0105 21:34:15.360996 5000 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:34:15 crc kubenswrapper[5000]: I0105 21:34:15.361130 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:34:15 crc kubenswrapper[5000]: E0105 21:34:15.412578 5000 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 05 21:34:17 crc kubenswrapper[5000]: I0105 21:34:17.838316 5000 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 05 21:34:17 crc kubenswrapper[5000]: I0105 21:34:17.838418 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 05 21:34:17 crc kubenswrapper[5000]: I0105 21:34:17.861220 5000 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 05 21:34:17 crc kubenswrapper[5000]: I0105 21:34:17.861299 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 05 21:34:19 crc kubenswrapper[5000]: I0105 21:34:19.445987 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:19 crc kubenswrapper[5000]: I0105 21:34:19.446228 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:19 crc kubenswrapper[5000]: I0105 21:34:19.447614 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:19 crc kubenswrapper[5000]: I0105 21:34:19.447671 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:19 crc kubenswrapper[5000]: I0105 21:34:19.447690 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:20 crc kubenswrapper[5000]: I0105 21:34:20.638064 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 05 21:34:20 crc kubenswrapper[5000]: I0105 21:34:20.638316 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:20 crc kubenswrapper[5000]: I0105 21:34:20.639876 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:20 crc kubenswrapper[5000]: I0105 21:34:20.639971 5000 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:20 crc kubenswrapper[5000]: I0105 21:34:20.640032 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:20 crc kubenswrapper[5000]: I0105 21:34:20.658756 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 05 21:34:21 crc kubenswrapper[5000]: I0105 21:34:21.412025 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:21 crc kubenswrapper[5000]: I0105 21:34:21.412807 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:21 crc kubenswrapper[5000]: I0105 21:34:21.412975 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:21 crc kubenswrapper[5000]: I0105 21:34:21.413002 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.303969 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.304105 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.305227 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.305285 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.305305 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 
21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.312025 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.414087 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.418451 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.418527 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.418552 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:22 crc kubenswrapper[5000]: E0105 21:34:22.845111 5000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.848054 5000 trace.go:236] Trace[2057897818]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Jan-2026 21:34:08.751) (total time: 14096ms): Jan 05 21:34:22 crc kubenswrapper[5000]: Trace[2057897818]: ---"Objects listed" error: 14096ms (21:34:22.847) Jan 05 21:34:22 crc kubenswrapper[5000]: Trace[2057897818]: [14.096152715s] [14.096152715s] END Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.848088 5000 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.849622 5000 trace.go:236] Trace[681430129]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Jan-2026 21:34:09.038) (total time: 13811ms): Jan 05 21:34:22 
crc kubenswrapper[5000]: Trace[681430129]: ---"Objects listed" error: 13811ms (21:34:22.849) Jan 05 21:34:22 crc kubenswrapper[5000]: Trace[681430129]: [13.811057413s] [13.811057413s] END Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.849672 5000 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.850999 5000 trace.go:236] Trace[1047020800]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Jan-2026 21:34:08.382) (total time: 14468ms): Jan 05 21:34:22 crc kubenswrapper[5000]: Trace[1047020800]: ---"Objects listed" error: 14468ms (21:34:22.850) Jan 05 21:34:22 crc kubenswrapper[5000]: Trace[1047020800]: [14.468153681s] [14.468153681s] END Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.851017 5000 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.851927 5000 trace.go:236] Trace[156800321]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Jan-2026 21:34:09.408) (total time: 13443ms): Jan 05 21:34:22 crc kubenswrapper[5000]: Trace[156800321]: ---"Objects listed" error: 13443ms (21:34:22.851) Jan 05 21:34:22 crc kubenswrapper[5000]: Trace[156800321]: [13.443334336s] [13.443334336s] END Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.851948 5000 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 05 21:34:22 crc kubenswrapper[5000]: E0105 21:34:22.852984 5000 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.883700 5000 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: 
Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54086->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.883775 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54086->192.168.126.11:17697: read: connection reset by peer" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.883703 5000 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54074->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.883882 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54074->192.168.126.11:17697: read: connection reset by peer" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.884307 5000 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.884409 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.893095 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.897424 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:22 crc kubenswrapper[5000]: I0105 21:34:22.928232 5000 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.273103 5000 apiserver.go:52] "Watching apiserver" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.276119 5000 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.276470 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.276828 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.277233 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.277371 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.277578 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.277858 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.278052 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.278182 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.278129 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.278997 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.279077 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.281654 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.283299 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.283526 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.283602 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.285771 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.285853 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.285990 5000 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.286028 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.307286 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.331257 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.349781 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.364006 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.372716 5000 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.386540 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.403905 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.417876 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.419924 5000 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e" exitCode=255 Jan 05 
21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.419983 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e"} Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.421327 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.426650 5000 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.429546 5000 scope.go:117] "RemoveContainer" containerID="5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.429761 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431392 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: 
I0105 21:34:23.431430 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431470 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431493 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431510 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431551 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.431572 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-05 21:34:23.931555136 +0000 UTC m=+18.887757605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431592 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431618 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431635 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431652 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431668 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431687 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431703 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431719 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431735 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431749 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431769 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431793 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431808 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431819 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431844 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431901 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431921 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431937 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431953 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431968 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.431990 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.432580 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.432013 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.432786 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.432970 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433113 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433158 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433191 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433220 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433267 5000 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433258 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433294 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433326 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433355 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433391 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433423 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433458 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433488 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433510 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433504 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433523 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433542 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433563 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433591 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433630 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433665 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433703 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433728 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433762 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433797 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433838 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433865 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433920 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") 
" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433961 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433995 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434025 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434057 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434091 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434124 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434158 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434193 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434225 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434256 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434294 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:34:23 crc 
kubenswrapper[5000]: I0105 21:34:23.434335 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434363 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434396 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434432 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434461 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434498 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434532 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434564 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434591 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434622 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434655 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 
21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434682 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434714 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434782 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434823 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434856 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434955 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435005 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435072 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435119 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435152 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435177 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 
21:34:23.435210 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435242 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435267 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435297 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435331 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435359 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435388 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435420 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435453 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435481 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435513 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:23 crc 
kubenswrapper[5000]: I0105 21:34:23.435543 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435569 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435602 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435633 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435663 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435690 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435722 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435759 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435788 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435824 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435859 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" 
(UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435886 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435937 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435970 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435997 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436025 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436053 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436083 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436110 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436140 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436171 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436203 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 05 21:34:23 crc 
kubenswrapper[5000]: I0105 21:34:23.436234 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436267 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436299 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436331 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436361 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436392 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436418 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436463 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436493 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436524 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436551 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 
21:34:23.436670 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436725 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436751 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436777 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436804 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436832 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436854 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436908 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436932 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436952 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436980 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437013 5000 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437039 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437067 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437089 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437113 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437134 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437160 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437181 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437202 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437253 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437287 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437325 5000 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437360 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437379 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437423 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437450 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437473 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437497 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437525 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437553 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437612 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437654 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437690 5000 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437718 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437743 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437770 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437792 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437812 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") 
" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437833 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437856 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437874 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437917 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.433595 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434172 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434272 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434374 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434608 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.434831 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435035 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435174 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435261 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435436 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435535 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435616 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.435831 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436096 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436182 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436490 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436554 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.436798 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437076 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437101 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437490 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437549 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437552 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437556 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437831 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.437859 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.438309 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.440283 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.440519 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.441075 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.441531 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.441709 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.441883 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.442509 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.441961 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.442070 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.442555 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.442533 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.442641 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.442654 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.442653 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.442693 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.442957 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.443015 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.443200 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.443326 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.443439 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.443459 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.443501 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.443614 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.443986 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.444273 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.444342 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.444163 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.444454 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.444785 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.445142 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.447476 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.447790 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.448001 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.448048 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.448272 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.448308 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.448548 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.448640 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.448677 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449179 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449215 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.443023 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449265 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449327 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449410 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449497 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449555 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449666 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449672 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449743 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449807 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449878 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449950 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.449970 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.450022 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.450155 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.450197 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.448463 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.450306 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.450361 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.450638 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.450681 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.450825 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.450846 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.450982 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.450456 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.451216 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.451260 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.451328 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.451371 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.451460 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.451625 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.451811 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.452088 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.451265 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.452315 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.452497 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.453133 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.452152 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.452572 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.452742 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.453446 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.453507 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.453565 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.452970 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.453928 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.454328 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.453748 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.454625 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.456848 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457031 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457221 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457274 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457302 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457326 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457350 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457374 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457378 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457399 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457426 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457522 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457758 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457865 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.458334 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.458319 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.458383 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.458614 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.458645 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.458789 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.458964 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.459328 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.460155 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.460198 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.460683 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.457452 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.461555 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.464033 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.461556 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.458182 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.461708 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.459851 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.462099 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.462129 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.462939 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.463753 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.461558 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.464147 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.464323 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.464360 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.461607 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.464228 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.462014 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.464604 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.464636 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.464796 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.464796 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.465217 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.465296 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.465302 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.465578 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.465601 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.466065 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.467021 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.467247 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.469411 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.469570 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.464368 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.469673 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.469717 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.469786 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 
21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.469829 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.469868 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.469948 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.469989 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470029 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470093 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.470102 5000 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470132 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470167 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470201 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.470223 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:23.970198994 +0000 UTC m=+18.926401463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470249 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470284 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470399 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470436 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470625 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470749 5000 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470817 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470834 5000 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470850 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470866 5000 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470883 5000 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470934 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470952 5000 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470965 5000 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470978 5000 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.470990 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471003 5000 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471015 5000 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471029 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471043 5000 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471056 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471068 5000 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471080 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471093 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 
21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471105 5000 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471117 5000 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471131 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471143 5000 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471157 5000 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471171 5000 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471184 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471196 5000 reconciler_common.go:293] "Volume detached for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471209 5000 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471223 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471113 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471265 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471294 5000 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471311 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 
21:34:23.471329 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471344 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471357 5000 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471371 5000 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471385 5000 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471397 5000 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471410 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471425 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" 
(UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471460 5000 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471473 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471486 5000 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.471498 5000 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.471574 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:23.97155349 +0000 UTC m=+18.927756009 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471501 5000 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471616 5000 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471636 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471656 5000 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471674 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471692 5000 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471711 
5000 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471729 5000 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471747 5000 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471846 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471936 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471957 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471978 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471996 5000 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472014 5000 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472033 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472050 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472068 5000 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472086 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472104 5000 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472121 5000 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472139 5000 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472157 5000 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472175 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472194 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472211 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472229 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472247 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 05 
21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472268 5000 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472292 5000 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472314 5000 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472338 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472361 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472388 5000 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472413 5000 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472436 5000 reconciler_common.go:293] "Volume detached for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472461 5000 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472486 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472511 5000 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472535 5000 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472559 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472585 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472611 5000 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472633 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472652 5000 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472773 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472812 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472831 5000 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472852 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472880 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" 
DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472932 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472951 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472968 5000 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472986 5000 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473003 5000 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473021 5000 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473039 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473058 5000 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473095 5000 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473114 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473157 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473175 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473193 5000 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473211 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473229 5000 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473279 5000 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473298 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473321 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473348 5000 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471560 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471600 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.471628 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472326 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.472706 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473297 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473309 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473670 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.473749 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474070 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474099 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474119 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474177 5000 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474192 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474206 5000 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474223 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474237 5000 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474282 5000 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474295 5000 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474307 5000 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474352 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474366 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474379 5000 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474391 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474405 5000 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc 
kubenswrapper[5000]: I0105 21:34:23.474415 5000 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474427 5000 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474439 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474451 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474462 5000 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474473 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474485 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474498 5000 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474511 5000 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474522 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474533 5000 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474544 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474556 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474568 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474582 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" 
(UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474598 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474565 5000 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474609 5000 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474784 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.474797 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.476092 5000 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.476190 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath 
\"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.476208 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.476222 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.476234 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.476248 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.476260 5000 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.476273 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.476286 5000 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 
21:34:23.476300 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.479046 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.485173 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.485185 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.485671 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.485691 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.485697 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.485949 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.486195 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.486416 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.486827 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.487074 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.487202 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.487224 5000 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.487315 5000 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:23.987291484 +0000 UTC m=+18.943494163 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.487695 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.488433 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.488613 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.488637 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.488652 5000 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.488673 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.488691 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:23.988681063 +0000 UTC m=+18.944883762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.491689 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.493726 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.495038 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.496281 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.498902 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.501110 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.501163 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.501287 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.501741 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.501987 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.502864 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.505629 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.500512 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.507843 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.507854 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.508075 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.508124 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.508404 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.508917 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.512046 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"c
luster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.521410 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.525974 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.527420 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.528796 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.540172 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.553141 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.564552 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.576283 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577277 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577344 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577375 5000 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577384 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577417 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577428 5000 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577437 5000 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577446 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577457 5000 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577465 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577477 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577487 5000 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577497 5000 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577505 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577516 5000 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577524 5000 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577532 5000 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577541 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577550 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577545 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577558 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577625 5000 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577584 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577635 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577817 5000 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577835 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577920 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577935 5000 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.577993 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.578006 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.578068 5000 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.578080 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.578169 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.578188 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.578200 5000 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.578255 5000 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.578269 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 05 
21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.578281 5000 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.578343 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.589254 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.596823 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.607779 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 05 21:34:23 crc kubenswrapper[5000]: W0105 21:34:23.616554 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-ab7b8bf5194a910157a0982c54850d5ba7e479be2482d9f36a8e1edbdd6b21ca WatchSource:0}: Error finding container ab7b8bf5194a910157a0982c54850d5ba7e479be2482d9f36a8e1edbdd6b21ca: Status 404 returned error can't find the container with id ab7b8bf5194a910157a0982c54850d5ba7e479be2482d9f36a8e1edbdd6b21ca Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.627061 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 05 21:34:23 crc kubenswrapper[5000]: W0105 21:34:23.646398 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-53cda21ccb66fa26c8f19e8cc46e88e1ed768f1c9fe3238441e2be7176c6c675 WatchSource:0}: Error finding container 53cda21ccb66fa26c8f19e8cc46e88e1ed768f1c9fe3238441e2be7176c6c675: Status 404 returned error can't find the container with id 53cda21ccb66fa26c8f19e8cc46e88e1ed768f1c9fe3238441e2be7176c6c675 Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.981584 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.981664 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:23 crc kubenswrapper[5000]: I0105 21:34:23.981705 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.981810 5000 configmap.go:193] 
Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.981881 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:24.981861603 +0000 UTC m=+19.938064062 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.982326 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:34:24.982316365 +0000 UTC m=+19.938518834 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.982418 5000 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:23 crc kubenswrapper[5000]: E0105 21:34:23.982451 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:24.982442814 +0000 UTC m=+19.938645283 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.082336 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.082412 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:24 crc kubenswrapper[5000]: E0105 21:34:24.082534 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:24 crc kubenswrapper[5000]: E0105 21:34:24.082548 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:24 crc kubenswrapper[5000]: E0105 21:34:24.082561 5000 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:24 crc kubenswrapper[5000]: E0105 21:34:24.082604 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:25.082592492 +0000 UTC m=+20.038794961 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:24 crc kubenswrapper[5000]: E0105 21:34:24.083127 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:24 crc kubenswrapper[5000]: E0105 21:34:24.083140 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:24 crc kubenswrapper[5000]: E0105 21:34:24.083148 5000 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:24 crc kubenswrapper[5000]: E0105 21:34:24.083170 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-05 21:34:25.083162953 +0000 UTC m=+20.039365422 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.424687 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.426596 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28"} Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.427851 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.429811 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816"} Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.429878 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5"} Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.429913 
5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"53cda21ccb66fa26c8f19e8cc46e88e1ed768f1c9fe3238441e2be7176c6c675"} Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.430816 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"540b723e189ae670c66f1f68085f5401b9c6d1c745ce698978a1283e429ca532"} Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.432567 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb"} Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.432605 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ab7b8bf5194a910157a0982c54850d5ba7e479be2482d9f36a8e1edbdd6b21ca"} Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.451328 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.479990 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.492038 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.505461 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.519861 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.532174 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.544486 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.558324 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.569601 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.580792 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.593815 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.605781 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.618978 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.631513 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.668814 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.687839 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.993014 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.993228 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:24 crc 
kubenswrapper[5000]: E0105 21:34:24.993254 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:34:26.993219306 +0000 UTC m=+21.949421855 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:34:24 crc kubenswrapper[5000]: I0105 21:34:24.993356 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:24 crc kubenswrapper[5000]: E0105 21:34:24.993384 5000 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:24 crc kubenswrapper[5000]: E0105 21:34:24.993450 5000 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:24 crc kubenswrapper[5000]: E0105 21:34:24.993461 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2026-01-05 21:34:26.993440343 +0000 UTC m=+21.949642812 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:24 crc kubenswrapper[5000]: E0105 21:34:24.993486 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:26.993479014 +0000 UTC m=+21.949681483 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.093940 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.093989 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:25 crc kubenswrapper[5000]: E0105 21:34:25.094092 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:25 crc kubenswrapper[5000]: E0105 21:34:25.094106 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:25 crc kubenswrapper[5000]: E0105 21:34:25.094115 5000 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:25 crc kubenswrapper[5000]: E0105 21:34:25.094156 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:27.094144258 +0000 UTC m=+22.050346727 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:25 crc kubenswrapper[5000]: E0105 21:34:25.094382 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:25 crc kubenswrapper[5000]: E0105 21:34:25.094452 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:25 crc kubenswrapper[5000]: E0105 21:34:25.094508 5000 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:25 crc kubenswrapper[5000]: E0105 21:34:25.094586 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:27.094577161 +0000 UTC m=+22.050779630 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.323235 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.323283 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:25 crc kubenswrapper[5000]: E0105 21:34:25.323385 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:25 crc kubenswrapper[5000]: E0105 21:34:25.323593 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.323603 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:25 crc kubenswrapper[5000]: E0105 21:34:25.323693 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.327201 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.327805 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.329211 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.329920 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.330976 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.331581 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.332356 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.333462 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.334132 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.335206 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.335780 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.336998 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.337531 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.338226 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.338401 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.338817 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.339410 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.340124 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.340531 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.341114 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 
21:34:25.341676 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.342219 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.342741 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.343160 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.343769 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.344308 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.344950 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.345585 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 
21:34:25.346125 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.346695 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.349775 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.350317 5000 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.350434 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.351238 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.352810 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.353733 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.354465 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.355942 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.356680 5000 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.357233 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.358824 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.359671 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.360700 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.361455 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.362476 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.363083 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.363955 5000 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.364461 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.367048 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.367902 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.368839 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.368903 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.369383 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.369813 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.372802 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.373638 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.374947 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.385983 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.399839 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.412207 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.436149 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:25 crc kubenswrapper[5000]: I0105 21:34:25.454665 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.053836 5000 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.056002 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.056052 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.056064 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.056116 5000 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.063326 5000 
kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.063592 5000 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.064500 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.064524 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.064534 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.064547 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.064557 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:26 crc kubenswrapper[5000]: E0105 21:34:26.080623 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.084035 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.084078 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.084091 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.084108 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.084120 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:26 crc kubenswrapper[5000]: E0105 21:34:26.096194 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.101465 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.101504 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.101515 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.101531 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.101543 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:26 crc kubenswrapper[5000]: E0105 21:34:26.117292 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.120972 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.121022 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.121036 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.121055 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.121069 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.138367 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.138560 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.138685 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.138841 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.139136 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:26 crc kubenswrapper[5000]: E0105 21:34:26.153628 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:26 crc kubenswrapper[5000]: E0105 21:34:26.154100 5000 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.156033 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.156077 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.156088 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.156107 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.156119 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.258291 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.258328 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.258337 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.258352 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.258360 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.360605 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.360643 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.360652 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.360664 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.360672 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.438674 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d"} Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.451652 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e549
2d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.462845 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.462877 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.462886 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.462917 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.462926 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.463294 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:26Z is after 2025-08-24T17:21:41Z" Jan 05 
21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.475136 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.487029 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.497364 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.507213 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.519111 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.532526 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:26Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.565881 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.565939 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.565949 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.565964 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.565975 5000 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.668819 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.668863 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.668875 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.668923 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.668941 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.771493 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.771563 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.771587 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.771616 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.771639 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.874707 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.874746 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.874755 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.874768 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.874778 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.977368 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.977420 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.977439 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.977464 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:26 crc kubenswrapper[5000]: I0105 21:34:26.977481 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:26Z","lastTransitionTime":"2026-01-05T21:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.011966 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.012037 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.012076 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.012143 5000 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.012188 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:31.012174031 +0000 UTC m=+25.968376510 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.012225 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:34:31.012196581 +0000 UTC m=+25.968399090 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.012301 5000 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.012391 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:31.012365126 +0000 UTC m=+25.968567635 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.080719 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.080770 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.080779 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.080793 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.080819 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:27Z","lastTransitionTime":"2026-01-05T21:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.113593 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.113682 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.113791 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.113796 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.113814 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.113836 5000 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 
21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.113837 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.113866 5000 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.113926 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:31.113882005 +0000 UTC m=+26.070084494 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.114143 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:31.114111851 +0000 UTC m=+26.070314370 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.183703 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.183815 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.183842 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.183953 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.183978 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:27Z","lastTransitionTime":"2026-01-05T21:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.287671 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.287783 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.287802 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.287834 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.287852 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:27Z","lastTransitionTime":"2026-01-05T21:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.322825 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.323031 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.323377 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.323661 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.323694 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:27 crc kubenswrapper[5000]: E0105 21:34:27.323830 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.390962 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.391038 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.391056 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.391080 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.391097 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:27Z","lastTransitionTime":"2026-01-05T21:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.494048 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.494128 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.494146 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.494175 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.494193 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:27Z","lastTransitionTime":"2026-01-05T21:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.596585 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.596619 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.596628 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.596640 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.596649 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:27Z","lastTransitionTime":"2026-01-05T21:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.699464 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.699494 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.699502 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.699515 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.699525 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:27Z","lastTransitionTime":"2026-01-05T21:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.802964 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.803023 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.803041 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.803079 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.803098 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:27Z","lastTransitionTime":"2026-01-05T21:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.905629 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.905670 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.905682 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.905703 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:27 crc kubenswrapper[5000]: I0105 21:34:27.905720 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:27Z","lastTransitionTime":"2026-01-05T21:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.007685 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.007732 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.007744 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.007759 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.007770 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:28Z","lastTransitionTime":"2026-01-05T21:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.111041 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.111080 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.111089 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.111102 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.111110 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:28Z","lastTransitionTime":"2026-01-05T21:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.213744 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.214035 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.214150 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.214266 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.214369 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:28Z","lastTransitionTime":"2026-01-05T21:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.317396 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.317495 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.317527 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.317555 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.317572 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:28Z","lastTransitionTime":"2026-01-05T21:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.420561 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.420611 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.420630 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.420651 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.420665 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:28Z","lastTransitionTime":"2026-01-05T21:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.524713 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.524798 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.524822 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.524850 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.524872 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:28Z","lastTransitionTime":"2026-01-05T21:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.627318 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.627569 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.627715 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.627857 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.628028 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:28Z","lastTransitionTime":"2026-01-05T21:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.731522 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.731561 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.731572 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.731590 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.731603 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:28Z","lastTransitionTime":"2026-01-05T21:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.833629 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.833676 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.833687 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.833702 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.833712 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:28Z","lastTransitionTime":"2026-01-05T21:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.935731 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.936004 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.936120 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.936230 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:28 crc kubenswrapper[5000]: I0105 21:34:28.936333 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:28Z","lastTransitionTime":"2026-01-05T21:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.038940 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.038993 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.039002 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.039017 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.039027 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:29Z","lastTransitionTime":"2026-01-05T21:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.140908 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.140948 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.140956 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.140970 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.140978 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:29Z","lastTransitionTime":"2026-01-05T21:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.243360 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.243400 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.243408 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.243422 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.243433 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:29Z","lastTransitionTime":"2026-01-05T21:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.323252 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.323327 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.323415 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:29 crc kubenswrapper[5000]: E0105 21:34:29.323411 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:29 crc kubenswrapper[5000]: E0105 21:34:29.323539 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:29 crc kubenswrapper[5000]: E0105 21:34:29.323621 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.345723 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.345754 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.345763 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.345776 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.345786 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:29Z","lastTransitionTime":"2026-01-05T21:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.450914 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.450964 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.450975 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.450993 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.451007 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:29Z","lastTransitionTime":"2026-01-05T21:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.553020 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.553064 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.553075 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.553090 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.553102 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:29Z","lastTransitionTime":"2026-01-05T21:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.656043 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.656082 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.656094 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.656110 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.656124 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:29Z","lastTransitionTime":"2026-01-05T21:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.758044 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.758072 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.758081 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.758093 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.758102 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:29Z","lastTransitionTime":"2026-01-05T21:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.860474 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.860541 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.860557 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.860582 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.860602 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:29Z","lastTransitionTime":"2026-01-05T21:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.882266 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-xpvqx"] Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.882625 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-sd8pl"] Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.882777 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-7r7z6"] Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.882834 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.883485 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7r7z6" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.883487 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.885746 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.886400 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.886987 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.887069 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.887662 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.888787 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.889007 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.889717 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.892365 
5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.892608 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.893035 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.893275 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.895797 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.931851 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:29Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.943549 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bdlf\" (UniqueName: \"kubernetes.io/projected/7e7d3ef9-ed44-43ac-826a-1b5606c8487b-kube-api-access-9bdlf\") pod \"machine-config-daemon-xpvqx\" (UID: \"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\") " pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.943617 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdrqm\" (UniqueName: \"kubernetes.io/projected/c10b7118-eb24-495a-bb8f-bc46a3c38799-kube-api-access-vdrqm\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.943689 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7e7d3ef9-ed44-43ac-826a-1b5606c8487b-proxy-tls\") pod \"machine-config-daemon-xpvqx\" (UID: \"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\") " pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:29 crc kubenswrapper[5000]: 
I0105 21:34:29.943784 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e7d3ef9-ed44-43ac-826a-1b5606c8487b-mcd-auth-proxy-config\") pod \"machine-config-daemon-xpvqx\" (UID: \"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\") " pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.943816 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9a481902-8b99-488e-b5b9-5fbc3800a0c9-hosts-file\") pod \"node-resolver-7r7z6\" (UID: \"9a481902-8b99-488e-b5b9-5fbc3800a0c9\") " pod="openshift-dns/node-resolver-7r7z6" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.943842 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-system-cni-dir\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.943917 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-var-lib-cni-multus\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.943964 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-var-lib-kubelet\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc 
kubenswrapper[5000]: I0105 21:34:29.944017 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngq2w\" (UniqueName: \"kubernetes.io/projected/9a481902-8b99-488e-b5b9-5fbc3800a0c9-kube-api-access-ngq2w\") pod \"node-resolver-7r7z6\" (UID: \"9a481902-8b99-488e-b5b9-5fbc3800a0c9\") " pod="openshift-dns/node-resolver-7r7z6" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.944046 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-var-lib-cni-bin\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.944080 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-hostroot\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.944124 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-os-release\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.944155 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-multus-conf-dir\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.944180 5000 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c10b7118-eb24-495a-bb8f-bc46a3c38799-multus-daemon-config\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.944205 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-run-multus-certs\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.944229 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-multus-socket-dir-parent\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.944254 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-run-k8s-cni-cncf-io\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.944293 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7e7d3ef9-ed44-43ac-826a-1b5606c8487b-rootfs\") pod \"machine-config-daemon-xpvqx\" (UID: \"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\") " pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 
21:34:29.944320 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-multus-cni-dir\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.944349 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-cnibin\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.944404 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c10b7118-eb24-495a-bb8f-bc46a3c38799-cni-binary-copy\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.944456 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-run-netns\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.944487 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-etc-kubernetes\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.952274 5000 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:29Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.963488 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.963521 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.963533 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.963546 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.963558 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:29Z","lastTransitionTime":"2026-01-05T21:34:29Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:29 crc kubenswrapper[5000]: I0105 21:34:29.988449 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:29Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.013191 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.031495 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.043598 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045062 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-run-multus-certs\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045090 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-multus-conf-dir\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045105 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c10b7118-eb24-495a-bb8f-bc46a3c38799-multus-daemon-config\") pod \"multus-sd8pl\" (UID: 
\"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045121 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-multus-socket-dir-parent\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045137 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-run-k8s-cni-cncf-io\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045159 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7e7d3ef9-ed44-43ac-826a-1b5606c8487b-rootfs\") pod \"machine-config-daemon-xpvqx\" (UID: \"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\") " pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045176 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-multus-cni-dir\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045179 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-multus-conf-dir\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc 
kubenswrapper[5000]: I0105 21:34:30.045191 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-cnibin\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045223 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-cnibin\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045237 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c10b7118-eb24-495a-bb8f-bc46a3c38799-cni-binary-copy\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045260 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-run-netns\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045282 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-etc-kubernetes\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045275 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-multus-socket-dir-parent\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045318 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bdlf\" (UniqueName: \"kubernetes.io/projected/7e7d3ef9-ed44-43ac-826a-1b5606c8487b-kube-api-access-9bdlf\") pod \"machine-config-daemon-xpvqx\" (UID: \"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\") " pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045341 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdrqm\" (UniqueName: \"kubernetes.io/projected/c10b7118-eb24-495a-bb8f-bc46a3c38799-kube-api-access-vdrqm\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045349 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-etc-kubernetes\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045343 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-run-k8s-cni-cncf-io\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045366 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9a481902-8b99-488e-b5b9-5fbc3800a0c9-hosts-file\") pod 
\"node-resolver-7r7z6\" (UID: \"9a481902-8b99-488e-b5b9-5fbc3800a0c9\") " pod="openshift-dns/node-resolver-7r7z6" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045343 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-run-multus-certs\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045367 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-run-netns\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045434 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-system-cni-dir\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045368 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-multus-cni-dir\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045485 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-system-cni-dir\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045522 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-var-lib-cni-multus\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045540 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-var-lib-kubelet\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045558 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7e7d3ef9-ed44-43ac-826a-1b5606c8487b-proxy-tls\") pod \"machine-config-daemon-xpvqx\" (UID: \"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\") " pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045576 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e7d3ef9-ed44-43ac-826a-1b5606c8487b-mcd-auth-proxy-config\") pod \"machine-config-daemon-xpvqx\" (UID: \"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\") " pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045579 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-var-lib-kubelet\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045591 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ngq2w\" (UniqueName: \"kubernetes.io/projected/9a481902-8b99-488e-b5b9-5fbc3800a0c9-kube-api-access-ngq2w\") pod \"node-resolver-7r7z6\" (UID: \"9a481902-8b99-488e-b5b9-5fbc3800a0c9\") " pod="openshift-dns/node-resolver-7r7z6" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045611 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-var-lib-cni-bin\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045627 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-hostroot\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045578 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-var-lib-cni-multus\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045652 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-os-release\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045678 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-host-var-lib-cni-bin\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045715 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9a481902-8b99-488e-b5b9-5fbc3800a0c9-hosts-file\") pod \"node-resolver-7r7z6\" (UID: \"9a481902-8b99-488e-b5b9-5fbc3800a0c9\") " pod="openshift-dns/node-resolver-7r7z6" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045724 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-hostroot\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.045833 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c10b7118-eb24-495a-bb8f-bc46a3c38799-os-release\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.046002 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c10b7118-eb24-495a-bb8f-bc46a3c38799-multus-daemon-config\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.046007 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c10b7118-eb24-495a-bb8f-bc46a3c38799-cni-binary-copy\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc 
kubenswrapper[5000]: I0105 21:34:30.046304 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e7d3ef9-ed44-43ac-826a-1b5606c8487b-mcd-auth-proxy-config\") pod \"machine-config-daemon-xpvqx\" (UID: \"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\") " pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.046352 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7e7d3ef9-ed44-43ac-826a-1b5606c8487b-rootfs\") pod \"machine-config-daemon-xpvqx\" (UID: \"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\") " pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.055061 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7e7d3ef9-ed44-43ac-826a-1b5606c8487b-proxy-tls\") pod \"machine-config-daemon-xpvqx\" (UID: \"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\") " pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.060081 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bdlf\" (UniqueName: \"kubernetes.io/projected/7e7d3ef9-ed44-43ac-826a-1b5606c8487b-kube-api-access-9bdlf\") pod \"machine-config-daemon-xpvqx\" (UID: \"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\") " pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.060490 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdrqm\" (UniqueName: \"kubernetes.io/projected/c10b7118-eb24-495a-bb8f-bc46a3c38799-kube-api-access-vdrqm\") pod \"multus-sd8pl\" (UID: \"c10b7118-eb24-495a-bb8f-bc46a3c38799\") " pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc 
kubenswrapper[5000]: I0105 21:34:30.060631 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.062598 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngq2w\" (UniqueName: \"kubernetes.io/projected/9a481902-8b99-488e-b5b9-5fbc3800a0c9-kube-api-access-ngq2w\") pod \"node-resolver-7r7z6\" (UID: \"9a481902-8b99-488e-b5b9-5fbc3800a0c9\") " pod="openshift-dns/node-resolver-7r7z6" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.065521 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.065552 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.065562 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.065575 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.065584 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:30Z","lastTransitionTime":"2026-01-05T21:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.072557 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.084756 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.097833 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.110634 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.122126 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.133656 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.144211 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.156482 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.167793 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.167821 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.167829 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.167840 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.167852 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:30Z","lastTransitionTime":"2026-01-05T21:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.167838 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.179448 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.191183 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.202190 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.203255 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.211333 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: W0105 21:34:30.213618 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e7d3ef9_ed44_43ac_826a_1b5606c8487b.slice/crio-0c3946dc00fb9b40420f282316768c4f6c44f22453d296a734dc5375f1b17160 WatchSource:0}: Error finding container 0c3946dc00fb9b40420f282316768c4f6c44f22453d296a734dc5375f1b17160: Status 404 returned error can't find the container with id 0c3946dc00fb9b40420f282316768c4f6c44f22453d296a734dc5375f1b17160 Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 
21:34:30.216557 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7r7z6" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.227079 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sd8pl" Jan 05 21:34:30 crc kubenswrapper[5000]: W0105 21:34:30.230663 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a481902_8b99_488e_b5b9_5fbc3800a0c9.slice/crio-523b23ae2773823dec36fbfdc1013a0a4e9d20e0a255746a681ffa27aeb222bf WatchSource:0}: Error finding container 523b23ae2773823dec36fbfdc1013a0a4e9d20e0a255746a681ffa27aeb222bf: Status 404 returned error can't find the container with id 523b23ae2773823dec36fbfdc1013a0a4e9d20e0a255746a681ffa27aeb222bf Jan 05 21:34:30 crc kubenswrapper[5000]: W0105 21:34:30.236782 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc10b7118_eb24_495a_bb8f_bc46a3c38799.slice/crio-23c96f5c605c01e00a6c9d7d70419e51dad0ccd46e854091aef8a87e73e71619 WatchSource:0}: Error finding container 23c96f5c605c01e00a6c9d7d70419e51dad0ccd46e854091aef8a87e73e71619: Status 404 returned error can't find the container with id 23c96f5c605c01e00a6c9d7d70419e51dad0ccd46e854091aef8a87e73e71619 Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.270353 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.270385 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.270395 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.270411 5000 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.270420 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:30Z","lastTransitionTime":"2026-01-05T21:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.293171 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ht6xh"] Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.294156 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.294719 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f5k4c"] Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.295568 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.295837 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.298001 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.298216 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.298286 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.298308 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.298387 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.299417 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.299640 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.300404 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.324775 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.338613 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.348761 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.348811 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-log-socket\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.348831 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-ovnkube-config\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.348853 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-systemd-units\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.348873 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-var-lib-openvswitch\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.348919 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-kubelet\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.348940 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-run-ovn-kubernetes\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.348962 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3199cfb3-5965-4ece-879d-2f49bd4c0976-os-release\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.348991 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3199cfb3-5965-4ece-879d-2f49bd4c0976-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349011 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2h8f\" (UniqueName: \"kubernetes.io/projected/a1406b03-70e6-4874-8cfe-5991e43cc720-kube-api-access-x2h8f\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349033 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-slash\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349053 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-node-log\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349083 5000 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3199cfb3-5965-4ece-879d-2f49bd4c0976-cnibin\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349101 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-cni-bin\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349130 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-ovnkube-script-lib\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349163 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-env-overrides\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349186 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3199cfb3-5965-4ece-879d-2f49bd4c0976-system-cni-dir\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 
21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349213 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-run-netns\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349234 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-openvswitch\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349255 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-ovn\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349283 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-cni-netd\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349312 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3199cfb3-5965-4ece-879d-2f49bd4c0976-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " 
pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349331 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-systemd\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349454 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62cm9\" (UniqueName: \"kubernetes.io/projected/3199cfb3-5965-4ece-879d-2f49bd4c0976-kube-api-access-62cm9\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349511 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3199cfb3-5965-4ece-879d-2f49bd4c0976-cni-binary-copy\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349537 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-etc-openvswitch\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.349705 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/a1406b03-70e6-4874-8cfe-5991e43cc720-ovn-node-metrics-cert\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.350340 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.366100 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.374165 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.374219 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.374237 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.374261 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.374279 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:30Z","lastTransitionTime":"2026-01-05T21:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.379498 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.390835 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.405033 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.416735 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.426822 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.442051 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450187 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3199cfb3-5965-4ece-879d-2f49bd4c0976-cnibin\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450222 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-cni-bin\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450237 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-ovnkube-script-lib\") pod 
\"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450254 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3199cfb3-5965-4ece-879d-2f49bd4c0976-system-cni-dir\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450269 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-env-overrides\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450290 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-run-netns\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450304 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-openvswitch\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450319 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-ovn\") pod \"ovnkube-node-f5k4c\" (UID: 
\"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450320 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-cni-bin\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450341 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3199cfb3-5965-4ece-879d-2f49bd4c0976-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450379 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-cni-netd\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450407 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62cm9\" (UniqueName: \"kubernetes.io/projected/3199cfb3-5965-4ece-879d-2f49bd4c0976-kube-api-access-62cm9\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450429 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3199cfb3-5965-4ece-879d-2f49bd4c0976-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: 
\"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450449 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-systemd\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450425 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-systemd\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450480 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3199cfb3-5965-4ece-879d-2f49bd4c0976-cni-binary-copy\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450496 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-etc-openvswitch\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450511 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a1406b03-70e6-4874-8cfe-5991e43cc720-ovn-node-metrics-cert\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450547 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450568 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-log-socket\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450587 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-ovnkube-config\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450610 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-kubelet\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450631 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-systemd-units\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450651 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-var-lib-openvswitch\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450673 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-run-ovn-kubernetes\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450692 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3199cfb3-5965-4ece-879d-2f49bd4c0976-os-release\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450748 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3199cfb3-5965-4ece-879d-2f49bd4c0976-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450757 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f5k4c\" 
(UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450766 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2h8f\" (UniqueName: \"kubernetes.io/projected/a1406b03-70e6-4874-8cfe-5991e43cc720-kube-api-access-x2h8f\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450784 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-slash\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450789 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3199cfb3-5965-4ece-879d-2f49bd4c0976-cnibin\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450801 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-node-log\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.450811 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-cni-netd\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451145 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-log-socket\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451370 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3199cfb3-5965-4ece-879d-2f49bd4c0976-os-release\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451452 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3199cfb3-5965-4ece-879d-2f49bd4c0976-cni-binary-copy\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451454 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-kubelet\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451483 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-systemd-units\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc 
kubenswrapper[5000]: I0105 21:34:30.451507 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-var-lib-openvswitch\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451518 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-etc-openvswitch\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451543 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-run-ovn-kubernetes\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451586 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-ovnkube-config\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451647 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-run-netns\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451736 5000 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-slash\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451789 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-openvswitch\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451765 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3199cfb3-5965-4ece-879d-2f49bd4c0976-system-cni-dir\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451751 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-node-log\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.451838 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-ovn\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.452005 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/3199cfb3-5965-4ece-879d-2f49bd4c0976-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.452051 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-ovnkube-script-lib\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.452444 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-env-overrides\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.454122 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7r7z6" event={"ID":"9a481902-8b99-488e-b5b9-5fbc3800a0c9","Type":"ContainerStarted","Data":"523b23ae2773823dec36fbfdc1013a0a4e9d20e0a255746a681ffa27aeb222bf"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.457572 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.458368 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a1406b03-70e6-4874-8cfe-5991e43cc720-ovn-node-metrics-cert\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.461134 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.461167 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e3606c29310e148be970c090222"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.461177 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"0c3946dc00fb9b40420f282316768c4f6c44f22453d296a734dc5375f1b17160"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.462500 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sd8pl" event={"ID":"c10b7118-eb24-495a-bb8f-bc46a3c38799","Type":"ContainerStarted","Data":"0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.462521 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sd8pl" 
event={"ID":"c10b7118-eb24-495a-bb8f-bc46a3c38799","Type":"ContainerStarted","Data":"23c96f5c605c01e00a6c9d7d70419e51dad0ccd46e854091aef8a87e73e71619"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.468473 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2h8f\" (UniqueName: \"kubernetes.io/projected/a1406b03-70e6-4874-8cfe-5991e43cc720-kube-api-access-x2h8f\") pod \"ovnkube-node-f5k4c\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.470728 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.470912 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62cm9\" (UniqueName: \"kubernetes.io/projected/3199cfb3-5965-4ece-879d-2f49bd4c0976-kube-api-access-62cm9\") pod \"multus-additional-cni-plugins-ht6xh\" (UID: \"3199cfb3-5965-4ece-879d-2f49bd4c0976\") " pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.476424 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.476461 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.476474 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.476489 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.476498 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:30Z","lastTransitionTime":"2026-01-05T21:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.483458 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.496306 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.511497 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.522880 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.532411 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.557418 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.568417 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.578782 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.578842 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.578855 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.578871 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.578882 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:30Z","lastTransitionTime":"2026-01-05T21:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.579465 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.590452 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.606081 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.615170 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.619409 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.623589 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.634585 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"n
ame\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e3606c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.651943 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:30Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.682798 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.682851 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.682863 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.682939 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.682948 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:30Z","lastTransitionTime":"2026-01-05T21:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.787916 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.787946 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.787960 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.787974 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.787984 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:30Z","lastTransitionTime":"2026-01-05T21:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.890159 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.890201 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.890210 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.890225 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.890234 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:30Z","lastTransitionTime":"2026-01-05T21:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.994086 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.996294 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.996431 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.996536 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:30 crc kubenswrapper[5000]: I0105 21:34:30.996618 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:30Z","lastTransitionTime":"2026-01-05T21:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.056425 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.056512 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.056543 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.056622 5000 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.056663 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:39.056651394 +0000 UTC m=+34.012853863 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.056966 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:34:39.056957093 +0000 UTC m=+34.013159562 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.057010 5000 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.057038 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:39.057030945 +0000 UTC m=+34.013233414 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.099534 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.099560 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.099568 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.099580 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.099588 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:31Z","lastTransitionTime":"2026-01-05T21:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.157637 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.157946 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.157843 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.158154 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.158168 5000 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.158215 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:39.158200304 +0000 UTC m=+34.114402773 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.158034 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.158246 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.158258 5000 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.158296 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:39.158287266 +0000 UTC m=+34.114489735 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.202338 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.202371 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.202382 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.202395 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.202407 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:31Z","lastTransitionTime":"2026-01-05T21:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.305701 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.305731 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.305739 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.305753 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.305761 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:31Z","lastTransitionTime":"2026-01-05T21:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.323163 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.323225 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.323299 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.323399 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.323555 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:31 crc kubenswrapper[5000]: E0105 21:34:31.323418 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.409180 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.409435 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.409446 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.409461 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.409479 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:31Z","lastTransitionTime":"2026-01-05T21:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.469923 5000 generic.go:334] "Generic (PLEG): container finished" podID="3199cfb3-5965-4ece-879d-2f49bd4c0976" containerID="5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8" exitCode=0 Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.470020 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" event={"ID":"3199cfb3-5965-4ece-879d-2f49bd4c0976","Type":"ContainerDied","Data":"5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.470105 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" event={"ID":"3199cfb3-5965-4ece-879d-2f49bd4c0976","Type":"ContainerStarted","Data":"1328a07a7873ef75011c253082f850e030daac32e7cb1a308d11e7d71ae4cc3a"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.471337 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7r7z6" event={"ID":"9a481902-8b99-488e-b5b9-5fbc3800a0c9","Type":"ContainerStarted","Data":"405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.475685 5000 generic.go:334] "Generic (PLEG): container finished" podID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerID="58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29" exitCode=0 Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.475736 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.475763 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" 
event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerStarted","Data":"6ccb50c47127fcfbee8e906d5bdd07f3c7b97e5d905ee6c5f92433458c7f224b"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.484072 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.500100 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.513031 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:31 crc 
kubenswrapper[5000]: I0105 21:34:31.513065 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.513075 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.513092 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.513108 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:31Z","lastTransitionTime":"2026-01-05T21:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.514003 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.526243 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.538109 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.556438 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.569920 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.581932 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.594072 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.615427 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.616119 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.616160 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.616169 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:31 crc 
kubenswrapper[5000]: I0105 21:34:31.616181 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.616189 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:31Z","lastTransitionTime":"2026-01-05T21:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.629847 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.642747 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.656638 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.676842 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.697703 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.708522 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.719158 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.719200 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.719209 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.719224 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.719234 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:31Z","lastTransitionTime":"2026-01-05T21:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.721872 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.734000 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.744416 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.756342 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.769671 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.792366 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.821318 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.821350 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.821358 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.821396 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.821406 5000 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:31Z","lastTransitionTime":"2026-01-05T21:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.829736 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.846120 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.862754 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.877121 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:31Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.923665 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:31 crc 
kubenswrapper[5000]: I0105 21:34:31.923857 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.923950 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.924096 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:31 crc kubenswrapper[5000]: I0105 21:34:31.924173 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:31Z","lastTransitionTime":"2026-01-05T21:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.027129 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.027169 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.027181 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.027198 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.027210 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:32Z","lastTransitionTime":"2026-01-05T21:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.130057 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.130103 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.130114 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.130133 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.130144 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:32Z","lastTransitionTime":"2026-01-05T21:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.232568 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.232607 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.232616 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.232632 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.232641 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:32Z","lastTransitionTime":"2026-01-05T21:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.334714 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.334745 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.334755 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.334770 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.334781 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:32Z","lastTransitionTime":"2026-01-05T21:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.437770 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.438025 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.438139 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.438256 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.438345 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:32Z","lastTransitionTime":"2026-01-05T21:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.482550 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerStarted","Data":"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.482818 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerStarted","Data":"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.482884 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerStarted","Data":"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.482971 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerStarted","Data":"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.483027 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerStarted","Data":"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.483082 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerStarted","Data":"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059"} Jan 05 21:34:32 crc kubenswrapper[5000]: 
I0105 21:34:32.484275 5000 generic.go:334] "Generic (PLEG): container finished" podID="3199cfb3-5965-4ece-879d-2f49bd4c0976" containerID="9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f" exitCode=0 Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.484344 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" event={"ID":"3199cfb3-5965-4ece-879d-2f49bd4c0976","Type":"ContainerDied","Data":"9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.501514 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.512790 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.532998 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.540473 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.540510 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.540519 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.540532 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.540547 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:32Z","lastTransitionTime":"2026-01-05T21:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.545139 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.556794 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.567187 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.579383 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.591612 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.606277 5000 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e3606c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.620146 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.635390 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.642766 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.642834 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.642844 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.642864 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.642874 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:32Z","lastTransitionTime":"2026-01-05T21:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.648934 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.661981 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:32Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.745280 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:32 crc 
kubenswrapper[5000]: I0105 21:34:32.745320 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.745331 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.745346 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.745368 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:32Z","lastTransitionTime":"2026-01-05T21:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.847801 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.847838 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.847846 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.847863 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.847872 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:32Z","lastTransitionTime":"2026-01-05T21:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.950843 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.950871 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.950881 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.950912 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:32 crc kubenswrapper[5000]: I0105 21:34:32.950923 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:32Z","lastTransitionTime":"2026-01-05T21:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.053655 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.053687 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.053694 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.053707 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.053719 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:33Z","lastTransitionTime":"2026-01-05T21:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.156097 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.156141 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.156152 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.156167 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.156178 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:33Z","lastTransitionTime":"2026-01-05T21:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.258938 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.258980 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.258989 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.259003 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.259012 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:33Z","lastTransitionTime":"2026-01-05T21:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.323462 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.323475 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.323553 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:33 crc kubenswrapper[5000]: E0105 21:34:33.323662 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:33 crc kubenswrapper[5000]: E0105 21:34:33.323730 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:33 crc kubenswrapper[5000]: E0105 21:34:33.323937 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.361112 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.361140 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.361149 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.361163 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.361172 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:33Z","lastTransitionTime":"2026-01-05T21:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.463072 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.463143 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.463165 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.463186 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.463206 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:33Z","lastTransitionTime":"2026-01-05T21:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.489956 5000 generic.go:334] "Generic (PLEG): container finished" podID="3199cfb3-5965-4ece-879d-2f49bd4c0976" containerID="fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b" exitCode=0 Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.490015 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" event={"ID":"3199cfb3-5965-4ece-879d-2f49bd4c0976","Type":"ContainerDied","Data":"fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b"} Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.511696 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.524825 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.540946 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 
21:34:33.557556 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 
21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.565185 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.565232 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.565247 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.565267 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 
21:34:33.565279 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:33Z","lastTransitionTime":"2026-01-05T21:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.574301 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.584547 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-px9xc"] Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.585070 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-px9xc" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.586878 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.587079 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.587102 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.590377 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.590616 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.601705 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.618202 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.629659 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 
21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.638725 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.650411 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.663422 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.667807 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.667843 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.667858 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.667879 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.667915 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:33Z","lastTransitionTime":"2026-01-05T21:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.674820 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.684332 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/70ba1bce-8373-472e-a7bf-776eba738f1c-serviceca\") pod \"node-ca-px9xc\" (UID: \"70ba1bce-8373-472e-a7bf-776eba738f1c\") " pod="openshift-image-registry/node-ca-px9xc" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.684374 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70ba1bce-8373-472e-a7bf-776eba738f1c-host\") pod \"node-ca-px9xc\" (UID: \"70ba1bce-8373-472e-a7bf-776eba738f1c\") " pod="openshift-image-registry/node-ca-px9xc" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.684409 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26ldj\" (UniqueName: \"kubernetes.io/projected/70ba1bce-8373-472e-a7bf-776eba738f1c-kube-api-access-26ldj\") pod \"node-ca-px9xc\" (UID: \"70ba1bce-8373-472e-a7bf-776eba738f1c\") " pod="openshift-image-registry/node-ca-px9xc" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.690404 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.700994 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.713443 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 
21:34:33.726303 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 
21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.738003 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.748768 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.758296 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.767321 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.769608 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.769656 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.769669 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 
21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.769686 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.769697 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:33Z","lastTransitionTime":"2026-01-05T21:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.783722 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn
kube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.784990 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/70ba1bce-8373-472e-a7bf-776eba738f1c-serviceca\") pod \"node-ca-px9xc\" (UID: \"70ba1bce-8373-472e-a7bf-776eba738f1c\") " pod="openshift-image-registry/node-ca-px9xc" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.785026 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70ba1bce-8373-472e-a7bf-776eba738f1c-host\") pod \"node-ca-px9xc\" (UID: \"70ba1bce-8373-472e-a7bf-776eba738f1c\") " pod="openshift-image-registry/node-ca-px9xc" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.785077 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26ldj\" (UniqueName: \"kubernetes.io/projected/70ba1bce-8373-472e-a7bf-776eba738f1c-kube-api-access-26ldj\") pod \"node-ca-px9xc\" (UID: \"70ba1bce-8373-472e-a7bf-776eba738f1c\") " pod="openshift-image-registry/node-ca-px9xc" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.785098 5000 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70ba1bce-8373-472e-a7bf-776eba738f1c-host\") pod \"node-ca-px9xc\" (UID: \"70ba1bce-8373-472e-a7bf-776eba738f1c\") " pod="openshift-image-registry/node-ca-px9xc" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.786199 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/70ba1bce-8373-472e-a7bf-776eba738f1c-serviceca\") pod \"node-ca-px9xc\" (UID: \"70ba1bce-8373-472e-a7bf-776eba738f1c\") " pod="openshift-image-registry/node-ca-px9xc" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.795505 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.803496 5000 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.804531 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26ldj\" (UniqueName: \"kubernetes.io/projected/70ba1bce-8373-472e-a7bf-776eba738f1c-kube-api-access-26ldj\") pod \"node-ca-px9xc\" (UID: \"70ba1bce-8373-472e-a7bf-776eba738f1c\") " pod="openshift-image-registry/node-ca-px9xc" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.814544 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.826045 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.835273 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:33Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.872166 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.872199 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.872208 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.872224 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.872233 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:33Z","lastTransitionTime":"2026-01-05T21:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.906774 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-px9xc" Jan 05 21:34:33 crc kubenswrapper[5000]: W0105 21:34:33.917699 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70ba1bce_8373_472e_a7bf_776eba738f1c.slice/crio-547e5ac27578bebf5f9d2bcff6ea9397033a51b212c2db85f5617bb54f1351d5 WatchSource:0}: Error finding container 547e5ac27578bebf5f9d2bcff6ea9397033a51b212c2db85f5617bb54f1351d5: Status 404 returned error can't find the container with id 547e5ac27578bebf5f9d2bcff6ea9397033a51b212c2db85f5617bb54f1351d5 Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.974845 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.974909 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.974922 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.974939 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:33 crc kubenswrapper[5000]: I0105 21:34:33.974949 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:33Z","lastTransitionTime":"2026-01-05T21:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.077256 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.077299 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.077311 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.077327 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.077338 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:34Z","lastTransitionTime":"2026-01-05T21:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.179468 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.179503 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.179511 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.179527 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.179538 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:34Z","lastTransitionTime":"2026-01-05T21:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.282326 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.282382 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.282394 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.282411 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.282423 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:34Z","lastTransitionTime":"2026-01-05T21:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.385134 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.385180 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.385189 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.385205 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.385214 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:34Z","lastTransitionTime":"2026-01-05T21:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.487831 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.487880 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.487922 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.487940 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.487978 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:34Z","lastTransitionTime":"2026-01-05T21:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.496846 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerStarted","Data":"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.498490 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-px9xc" event={"ID":"70ba1bce-8373-472e-a7bf-776eba738f1c","Type":"ContainerStarted","Data":"0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.498523 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-px9xc" event={"ID":"70ba1bce-8373-472e-a7bf-776eba738f1c","Type":"ContainerStarted","Data":"547e5ac27578bebf5f9d2bcff6ea9397033a51b212c2db85f5617bb54f1351d5"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.500990 5000 generic.go:334] "Generic (PLEG): container finished" podID="3199cfb3-5965-4ece-879d-2f49bd4c0976" containerID="1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72" exitCode=0 Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.501029 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" event={"ID":"3199cfb3-5965-4ece-879d-2f49bd4c0976","Type":"ContainerDied","Data":"1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.512364 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.527958 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.542548 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.553559 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.571226 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.586349 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 
21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.591600 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.591641 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.591653 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.591670 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.591681 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:34Z","lastTransitionTime":"2026-01-05T21:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.598680 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.624347 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.649322 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.668314 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.691352 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.694183 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.694234 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.694249 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.694271 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.694285 5000 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:34Z","lastTransitionTime":"2026-01-05T21:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.704510 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e3606c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.720126 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 
21:34:34.737658 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 
21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.754240 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.769749 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.782786 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.793106 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.797064 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.797115 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.797129 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.797147 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.797157 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:34Z","lastTransitionTime":"2026-01-05T21:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.805495 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.814427 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.831816 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.844732 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.855571 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.868414 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.883146 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.897755 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.899158 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.899208 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.899220 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.899238 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.899250 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:34Z","lastTransitionTime":"2026-01-05T21:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.911731 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e3606c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:34 crc kubenswrapper[5000]: I0105 21:34:34.928623 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:34Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.002876 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.002931 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.002940 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.002954 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.002964 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:35Z","lastTransitionTime":"2026-01-05T21:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.105711 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.105934 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.106042 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.106205 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.106293 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:35Z","lastTransitionTime":"2026-01-05T21:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.207991 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.208033 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.208042 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.208057 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.208067 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:35Z","lastTransitionTime":"2026-01-05T21:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.295007 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.310185 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.310376 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.310456 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.310481 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.310509 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.310529 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:35Z","lastTransitionTime":"2026-01-05T21:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.322970 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:35 crc kubenswrapper[5000]: E0105 21:34:35.323347 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.323044 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:35 crc kubenswrapper[5000]: E0105 21:34:35.323401 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.322996 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:35 crc kubenswrapper[5000]: E0105 21:34:35.323444 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.330914 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.342742 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.354981 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.366653 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551
c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.376681 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.385962 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.396887 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.404947 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.411998 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.412114 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.412174 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.412268 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.412346 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:35Z","lastTransitionTime":"2026-01-05T21:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.418575 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.432883 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.443855 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.454096 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.469523 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.479081 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.489329 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.503645 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.506940 5000 generic.go:334] "Generic (PLEG): container finished" podID="3199cfb3-5965-4ece-879d-2f49bd4c0976" containerID="216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496" exitCode=0 Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.506984 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" event={"ID":"3199cfb3-5965-4ece-879d-2f49bd4c0976","Type":"ContainerDied","Data":"216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496"} Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.513789 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.513828 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.513837 5000 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.513851 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.513861 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:35Z","lastTransitionTime":"2026-01-05T21:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.516775 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b8
9c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"sta
te\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for 
mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.527688 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.543442 5000 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e3606c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.560194 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.574261 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.590560 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.608344 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.615881 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:35 crc 
kubenswrapper[5000]: I0105 21:34:35.615932 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.615941 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.615957 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.615969 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:35Z","lastTransitionTime":"2026-01-05T21:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.618788 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.628418 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.647822 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.659934 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 
21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.668938 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.686368 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.698633 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.709598 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.718343 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.718387 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.718396 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.718415 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.718427 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:35Z","lastTransitionTime":"2026-01-05T21:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.721679 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.734603 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.753352 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf0
1bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.768089 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.786096 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.802373 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.820670 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.820708 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.820721 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.820736 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.820746 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:35Z","lastTransitionTime":"2026-01-05T21:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.820786 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.839919 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.855216 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.869507 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:35Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.924167 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.924205 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.924214 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.924226 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:35 crc kubenswrapper[5000]: I0105 21:34:35.924238 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:35Z","lastTransitionTime":"2026-01-05T21:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.027358 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.027408 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.027419 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.027437 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.027451 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.129573 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.129647 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.129660 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.129683 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.129697 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.217374 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.217453 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.217479 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.217509 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.217529 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: E0105 21:34:36.232946 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.238137 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.238195 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.238208 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.238234 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.238248 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.257676 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.257731 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.257750 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.257768 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.257780 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.277798 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.277833 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.277845 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.277863 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.277877 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: E0105 21:34:36.292520 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.296961 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.297009 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.297021 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.297040 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.297052 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: E0105 21:34:36.309778 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: E0105 21:34:36.309934 5000 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.311942 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.312025 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.312046 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.312089 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.312105 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.414631 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.414684 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.414697 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.414716 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.414728 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.513328 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerStarted","Data":"d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd"} Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.513720 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.513800 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.521005 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.521039 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.521047 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.521060 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.521069 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.523763 5000 generic.go:334] "Generic (PLEG): container finished" podID="3199cfb3-5965-4ece-879d-2f49bd4c0976" containerID="eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf" exitCode=0 Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.523808 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" event={"ID":"3199cfb3-5965-4ece-879d-2f49bd4c0976","Type":"ContainerDied","Data":"eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf"} Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.530371 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.538473 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.538827 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.540703 5000 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.551066 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.562212 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf0
1bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.572693 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.581244 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.593108 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.603089 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.613853 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.625376 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.625597 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.625614 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.625624 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 
21:34:36.625639 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.625649 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.637713 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.647449 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.656012 5000 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.671136 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.682417 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.695670 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.709297 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.723465 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf0
1bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.728059 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.728081 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.728089 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.728101 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.728110 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.735104 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.747348 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.758379 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.767504 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.784227 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.795654 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:2
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.804730 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.817227 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.831409 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.831497 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.831508 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.831523 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.831532 5000 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.852753 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.892284 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:36Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.933919 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.933969 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.933981 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.933999 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:36 crc kubenswrapper[5000]: I0105 21:34:36.934012 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:36Z","lastTransitionTime":"2026-01-05T21:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.036199 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.036249 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.036259 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.036273 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.036282 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:37Z","lastTransitionTime":"2026-01-05T21:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.139179 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.139218 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.139229 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.139242 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.139253 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:37Z","lastTransitionTime":"2026-01-05T21:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.242576 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.242624 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.242640 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.242662 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.242682 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:37Z","lastTransitionTime":"2026-01-05T21:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.323694 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.323742 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.323721 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:37 crc kubenswrapper[5000]: E0105 21:34:37.323850 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:37 crc kubenswrapper[5000]: E0105 21:34:37.323946 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:37 crc kubenswrapper[5000]: E0105 21:34:37.324074 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.346133 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.346181 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.346192 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.346209 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.346221 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:37Z","lastTransitionTime":"2026-01-05T21:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.448674 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.448750 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.448770 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.448850 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.448871 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:37Z","lastTransitionTime":"2026-01-05T21:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.535355 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" event={"ID":"3199cfb3-5965-4ece-879d-2f49bd4c0976","Type":"ContainerStarted","Data":"c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea"} Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.535496 5000 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.551409 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.551700 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.551950 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.552147 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.552283 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:37Z","lastTransitionTime":"2026-01-05T21:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.559356 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.580154 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.597380 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.620453 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.635629 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.652017 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.657552 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.657591 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.657600 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 
21:34:37.657615 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.657630 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:37Z","lastTransitionTime":"2026-01-05T21:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.668674 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.678719 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc08
6a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.690545 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.703490 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.727736 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.743082 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.759406 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.759779 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.759803 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.759811 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.759825 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.759835 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:37Z","lastTransitionTime":"2026-01-05T21:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.773639 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:37Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.861601 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.861631 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.861639 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.861652 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.861660 5000 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:37Z","lastTransitionTime":"2026-01-05T21:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.964243 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.964270 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.964279 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.964291 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:37 crc kubenswrapper[5000]: I0105 21:34:37.964299 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:37Z","lastTransitionTime":"2026-01-05T21:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.067705 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.067746 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.067755 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.067769 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.067783 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:38Z","lastTransitionTime":"2026-01-05T21:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.171737 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.171793 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.171805 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.171827 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.171840 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:38Z","lastTransitionTime":"2026-01-05T21:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.274992 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.275031 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.275040 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.275058 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.275068 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:38Z","lastTransitionTime":"2026-01-05T21:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.385426 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.385497 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.385532 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.385572 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.385587 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:38Z","lastTransitionTime":"2026-01-05T21:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.488287 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.488343 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.488354 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.488367 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.488376 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:38Z","lastTransitionTime":"2026-01-05T21:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.538972 5000 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.590161 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.590202 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.590213 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.590228 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.590239 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:38Z","lastTransitionTime":"2026-01-05T21:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.692747 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.692790 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.692801 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.692816 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.692827 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:38Z","lastTransitionTime":"2026-01-05T21:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.795808 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.796032 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.796096 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.796691 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.796725 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:38Z","lastTransitionTime":"2026-01-05T21:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.898859 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.898951 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.898967 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.898991 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:38 crc kubenswrapper[5000]: I0105 21:34:38.899003 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:38Z","lastTransitionTime":"2026-01-05T21:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.001418 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.001448 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.001458 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.001471 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.001488 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:39Z","lastTransitionTime":"2026-01-05T21:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.103587 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.103653 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.103665 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.103679 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.103691 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:39Z","lastTransitionTime":"2026-01-05T21:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.140096 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.140212 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.140272 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.140348 5000 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.140363 5000 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.140383 5000 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:34:55.140310932 +0000 UTC m=+50.096513401 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.140469 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:55.140459077 +0000 UTC m=+50.096661546 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.140489 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:55.140479857 +0000 UTC m=+50.096682326 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.206789 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.206830 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.206842 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.206861 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.206876 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:39Z","lastTransitionTime":"2026-01-05T21:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.240913 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.240973 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.241116 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.241135 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.241147 5000 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.241217 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:55.241174283 +0000 UTC m=+50.197376752 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.241358 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.241434 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.241458 5000 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.241578 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:55.241546343 +0000 UTC m=+50.197748852 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.309365 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.309412 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.309422 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.309439 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.309450 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:39Z","lastTransitionTime":"2026-01-05T21:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.322813 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.322867 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.322925 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.322998 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.323108 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:39 crc kubenswrapper[5000]: E0105 21:34:39.323205 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.412170 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.412209 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.412221 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.412237 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.412248 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:39Z","lastTransitionTime":"2026-01-05T21:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.514911 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.514980 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.514990 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.515007 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.515017 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:39Z","lastTransitionTime":"2026-01-05T21:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.543700 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/0.log" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.546316 5000 generic.go:334] "Generic (PLEG): container finished" podID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerID="d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd" exitCode=1 Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.546354 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd"} Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.546980 5000 scope.go:117] "RemoveContainer" containerID="d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.561353 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.573208 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.592095 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:39Z\\\",\\\"message\\\":\\\":140\\\\nI0105 21:34:38.889877 6299 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:34:38.890592 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:34:38.890632 6299 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0105 21:34:38.890640 6299 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0105 21:34:38.890667 6299 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0105 21:34:38.890675 6299 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:38.890697 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0105 21:34:38.890736 6299 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:38.890702 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0105 21:34:38.890770 6299 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:38.890779 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:38.890707 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0105 21:34:38.890761 6299 factory.go:656] Stopping watch factory\\\\nI0105 21:34:38.890788 6299 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.604059 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.615775 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.617745 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.617782 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.617794 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.617825 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.617842 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:39Z","lastTransitionTime":"2026-01-05T21:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.628610 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.641379 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf0
1bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.656627 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.669582 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.684213 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.701185 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.718847 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.721329 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.721352 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.721363 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.721377 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.721389 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:39Z","lastTransitionTime":"2026-01-05T21:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.732692 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.746087 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:39Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.825355 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:39 crc 
kubenswrapper[5000]: I0105 21:34:39.825404 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.825413 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.825428 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.825439 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:39Z","lastTransitionTime":"2026-01-05T21:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.928005 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.928080 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.928099 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.928128 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:39 crc kubenswrapper[5000]: I0105 21:34:39.928144 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:39Z","lastTransitionTime":"2026-01-05T21:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.030131 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.030179 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.030189 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.030204 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.030214 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:40Z","lastTransitionTime":"2026-01-05T21:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.132953 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.133010 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.133023 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.133046 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.133061 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:40Z","lastTransitionTime":"2026-01-05T21:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.235658 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.235726 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.235747 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.235774 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.235797 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:40Z","lastTransitionTime":"2026-01-05T21:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.339359 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.339418 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.339437 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.339463 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.339485 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:40Z","lastTransitionTime":"2026-01-05T21:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.442134 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.442186 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.442196 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.442228 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.442240 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:40Z","lastTransitionTime":"2026-01-05T21:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.544470 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.544513 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.544524 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.544542 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.544552 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:40Z","lastTransitionTime":"2026-01-05T21:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.551253 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/0.log" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.553484 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerStarted","Data":"4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1"} Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.553597 5000 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.571517 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.583611 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.603849 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.625885 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.640618 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.647592 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.647646 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.647667 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.647694 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.647714 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:40Z","lastTransitionTime":"2026-01-05T21:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.658645 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b0
84652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.671424 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.687259 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.702506 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.715628 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.728318 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.743446 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.751076 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.751152 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.751165 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.751184 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.751198 5000 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:40Z","lastTransitionTime":"2026-01-05T21:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.757932 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.779464 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:39Z\\\",\\\"message\\\":\\\":140\\\\nI0105 21:34:38.889877 6299 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:34:38.890592 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0105 21:34:38.890632 6299 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0105 21:34:38.890640 6299 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0105 21:34:38.890667 6299 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0105 21:34:38.890675 6299 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:38.890697 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0105 21:34:38.890736 6299 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:38.890702 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0105 21:34:38.890770 6299 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:38.890779 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:38.890707 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0105 21:34:38.890761 6299 factory.go:656] Stopping watch factory\\\\nI0105 21:34:38.890788 6299 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:40Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.854331 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.854409 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.854424 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.854448 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.854464 5000 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:40Z","lastTransitionTime":"2026-01-05T21:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.957381 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.957422 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.957431 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.957447 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:40 crc kubenswrapper[5000]: I0105 21:34:40.957457 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:40Z","lastTransitionTime":"2026-01-05T21:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.059963 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.060006 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.060018 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.060032 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.060043 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:41Z","lastTransitionTime":"2026-01-05T21:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.162516 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.162561 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.162572 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.162589 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.162599 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:41Z","lastTransitionTime":"2026-01-05T21:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.265191 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.265233 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.265246 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.265261 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.265272 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:41Z","lastTransitionTime":"2026-01-05T21:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.323030 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.323120 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.323030 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:41 crc kubenswrapper[5000]: E0105 21:34:41.323159 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:41 crc kubenswrapper[5000]: E0105 21:34:41.323249 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:41 crc kubenswrapper[5000]: E0105 21:34:41.323322 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.368564 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.368613 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.368628 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.368648 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.368663 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:41Z","lastTransitionTime":"2026-01-05T21:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.470427 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.470713 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.470819 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.470969 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.471184 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:41Z","lastTransitionTime":"2026-01-05T21:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.557857 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/1.log" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.558573 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/0.log" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.561691 5000 generic.go:334] "Generic (PLEG): container finished" podID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerID="4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1" exitCode=1 Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.561719 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1"} Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.561759 5000 scope.go:117] "RemoveContainer" containerID="d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.562415 5000 scope.go:117] "RemoveContainer" containerID="4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1" Jan 05 21:34:41 crc kubenswrapper[5000]: E0105 21:34:41.562569 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.574125 5000 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.574169 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.574181 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.574199 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.574211 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:41Z","lastTransitionTime":"2026-01-05T21:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.577617 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.592000 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.610798 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.621569 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.632465 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.650714 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:39Z\\\",\\\"message\\\":\\\":140\\\\nI0105 21:34:38.889877 6299 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:34:38.890592 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0105 21:34:38.890632 6299 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0105 21:34:38.890640 6299 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0105 21:34:38.890667 6299 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0105 21:34:38.890675 6299 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:38.890697 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0105 21:34:38.890736 6299 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:38.890702 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0105 21:34:38.890770 6299 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:38.890779 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:38.890707 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0105 21:34:38.890761 6299 factory.go:656] Stopping watch factory\\\\nI0105 21:34:38.890788 6299 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:40Z\\\",\\\"message\\\":\\\"ved *v1.EgressIP event handler 8\\\\nI0105 21:34:40.380745 6424 handler.go:208] Removed *v1.Node event handler 2\\\\nI0105 21:34:40.380782 6424 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:34:40.380830 6424 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.380917 6424 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:34:40.380943 6424 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0105 21:34:40.381000 6424 handler.go:190] Sending *v1.Pod 
event handler 3 for removal\\\\nI0105 21:34:40.381062 6424 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:40.381049 6424 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.381088 6424 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0105 21:34:40.381195 6424 factory.go:656] Stopping watch factory\\\\nI0105 21:34:40.381237 6424 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0105 21:34:40.381097 6424 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:40.381304 6424 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:40.381352 6424 ovnkube.go:599] Stopped ovnkube\\\\nI0105 21:34:40.381390 6424 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0105 21:34:40.381470 6424 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.664095 5000 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.677619 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.677662 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.677671 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.677687 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.677697 5000 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:41Z","lastTransitionTime":"2026-01-05T21:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.680996 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.693564 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.710333 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.728521 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf0
1bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.741583 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.757569 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.773317 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:41Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.802363 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.802399 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.802410 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.802426 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.802438 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:41Z","lastTransitionTime":"2026-01-05T21:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.904589 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.904627 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.904638 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.904656 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:41 crc kubenswrapper[5000]: I0105 21:34:41.904667 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:41Z","lastTransitionTime":"2026-01-05T21:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.007001 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.007042 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.007054 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.007069 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.007079 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:42Z","lastTransitionTime":"2026-01-05T21:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.109941 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.109982 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.109996 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.110012 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.110036 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:42Z","lastTransitionTime":"2026-01-05T21:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.213110 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.213369 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.213434 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.213504 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.213562 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:42Z","lastTransitionTime":"2026-01-05T21:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.316186 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.316215 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.316225 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.316243 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.316254 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:42Z","lastTransitionTime":"2026-01-05T21:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.418785 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.418874 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.418903 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.418916 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.418925 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:42Z","lastTransitionTime":"2026-01-05T21:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.484387 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7"] Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.485044 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.487146 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.487283 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.498937 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.509559 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.520905 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.520964 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.520975 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.520991 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.521002 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:42Z","lastTransitionTime":"2026-01-05T21:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.528614 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:39Z\\\",\\\"message\\\":\\\":140\\\\nI0105 21:34:38.889877 6299 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:34:38.890592 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0105 21:34:38.890632 6299 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0105 21:34:38.890640 6299 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0105 21:34:38.890667 6299 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0105 21:34:38.890675 6299 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:38.890697 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0105 21:34:38.890736 6299 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:38.890702 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0105 21:34:38.890770 6299 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:38.890779 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:38.890707 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0105 21:34:38.890761 6299 factory.go:656] Stopping watch factory\\\\nI0105 21:34:38.890788 6299 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:40Z\\\",\\\"message\\\":\\\"ved *v1.EgressIP event handler 8\\\\nI0105 21:34:40.380745 6424 handler.go:208] Removed *v1.Node event handler 2\\\\nI0105 21:34:40.380782 6424 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:34:40.380830 6424 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.380917 6424 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:34:40.380943 6424 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0105 21:34:40.381000 6424 handler.go:190] Sending *v1.Pod 
event handler 3 for removal\\\\nI0105 21:34:40.381062 6424 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:40.381049 6424 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.381088 6424 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0105 21:34:40.381195 6424 factory.go:656] Stopping watch factory\\\\nI0105 21:34:40.381237 6424 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0105 21:34:40.381097 6424 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:40.381304 6424 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:40.381352 6424 ovnkube.go:599] Stopped ovnkube\\\\nI0105 21:34:40.381390 6424 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0105 21:34:40.381470 6424 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.540414 5000 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.550248 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.562272 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.565940 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/1.log" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.570025 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmw4r\" (UniqueName: \"kubernetes.io/projected/5478ab4e-c4bc-4871-92f9-d29d6d9486c8-kube-api-access-dmw4r\") pod \"ovnkube-control-plane-749d76644c-ckdm7\" (UID: \"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.570227 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/5478ab4e-c4bc-4871-92f9-d29d6d9486c8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ckdm7\" (UID: \"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.570346 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5478ab4e-c4bc-4871-92f9-d29d6d9486c8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ckdm7\" (UID: \"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.570443 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5478ab4e-c4bc-4871-92f9-d29d6d9486c8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ckdm7\" (UID: \"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.573866 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.585831 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.597166 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.606017 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.617589 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.623162 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.623303 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.623401 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.623490 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.623571 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:42Z","lastTransitionTime":"2026-01-05T21:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.631244 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.645797 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.657274 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.666581 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:42Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.671009 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmw4r\" (UniqueName: \"kubernetes.io/projected/5478ab4e-c4bc-4871-92f9-d29d6d9486c8-kube-api-access-dmw4r\") pod \"ovnkube-control-plane-749d76644c-ckdm7\" (UID: \"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.671081 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5478ab4e-c4bc-4871-92f9-d29d6d9486c8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ckdm7\" (UID: \"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.671112 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5478ab4e-c4bc-4871-92f9-d29d6d9486c8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ckdm7\" (UID: \"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.671151 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5478ab4e-c4bc-4871-92f9-d29d6d9486c8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ckdm7\" (UID: \"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.671975 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5478ab4e-c4bc-4871-92f9-d29d6d9486c8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ckdm7\" (UID: \"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.672057 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5478ab4e-c4bc-4871-92f9-d29d6d9486c8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ckdm7\" (UID: \"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.676758 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5478ab4e-c4bc-4871-92f9-d29d6d9486c8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ckdm7\" (UID: \"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.688367 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmw4r\" (UniqueName: \"kubernetes.io/projected/5478ab4e-c4bc-4871-92f9-d29d6d9486c8-kube-api-access-dmw4r\") pod \"ovnkube-control-plane-749d76644c-ckdm7\" (UID: \"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 
21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.726058 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.726126 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.726145 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.726170 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.726186 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:42Z","lastTransitionTime":"2026-01-05T21:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.798047 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" Jan 05 21:34:42 crc kubenswrapper[5000]: W0105 21:34:42.811960 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5478ab4e_c4bc_4871_92f9_d29d6d9486c8.slice/crio-192bd631e4fa0cb8a57d995937b92f37df828a8401656fad8c2d6ef2332ac50f WatchSource:0}: Error finding container 192bd631e4fa0cb8a57d995937b92f37df828a8401656fad8c2d6ef2332ac50f: Status 404 returned error can't find the container with id 192bd631e4fa0cb8a57d995937b92f37df828a8401656fad8c2d6ef2332ac50f Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.829249 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.829400 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.829460 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.829547 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.829613 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:42Z","lastTransitionTime":"2026-01-05T21:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.932831 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.932883 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.932915 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.932936 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:42 crc kubenswrapper[5000]: I0105 21:34:42.932951 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:42Z","lastTransitionTime":"2026-01-05T21:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.035398 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.035427 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.035437 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.035450 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.035460 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:43Z","lastTransitionTime":"2026-01-05T21:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.138075 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.138122 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.138135 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.138152 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.138164 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:43Z","lastTransitionTime":"2026-01-05T21:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.240415 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.240453 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.240464 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.240479 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.240489 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:43Z","lastTransitionTime":"2026-01-05T21:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.323583 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.323614 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.323694 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:43 crc kubenswrapper[5000]: E0105 21:34:43.323708 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:43 crc kubenswrapper[5000]: E0105 21:34:43.324017 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:43 crc kubenswrapper[5000]: E0105 21:34:43.324053 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.342787 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.342814 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.342822 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.342836 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.342844 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:43Z","lastTransitionTime":"2026-01-05T21:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.446066 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.446119 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.446137 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.446158 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.446176 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:43Z","lastTransitionTime":"2026-01-05T21:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.549546 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.549591 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.549605 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.549629 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.549644 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:43Z","lastTransitionTime":"2026-01-05T21:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.575267 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" event={"ID":"5478ab4e-c4bc-4871-92f9-d29d6d9486c8","Type":"ContainerStarted","Data":"320dbec31778c9229fbb04d05bf98b3f6608be3b07ae98f6c983ade0bd3f2149"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.575321 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" event={"ID":"5478ab4e-c4bc-4871-92f9-d29d6d9486c8","Type":"ContainerStarted","Data":"288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.575337 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" event={"ID":"5478ab4e-c4bc-4871-92f9-d29d6d9486c8","Type":"ContainerStarted","Data":"192bd631e4fa0cb8a57d995937b92f37df828a8401656fad8c2d6ef2332ac50f"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.590250 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gpwcw"] Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.591091 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:43 crc kubenswrapper[5000]: E0105 21:34:43.591199 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.605306 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.621611 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.642944 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.651613 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:43 crc 
kubenswrapper[5000]: I0105 21:34:43.651644 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.651652 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.651665 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.651674 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:43Z","lastTransitionTime":"2026-01-05T21:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.656952 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.673966 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05
T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.683768 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8pk5s\" (UniqueName: \"kubernetes.io/projected/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-kube-api-access-8pk5s\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.684026 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.694965 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.721612 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:39Z\\\",\\\"message\\\":\\\":140\\\\nI0105 21:34:38.889877 6299 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:34:38.890592 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0105 21:34:38.890632 6299 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0105 21:34:38.890640 6299 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0105 21:34:38.890667 6299 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0105 21:34:38.890675 6299 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:38.890697 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0105 21:34:38.890736 6299 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:38.890702 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0105 21:34:38.890770 6299 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:38.890779 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:38.890707 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0105 21:34:38.890761 6299 factory.go:656] Stopping watch factory\\\\nI0105 21:34:38.890788 6299 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:40Z\\\",\\\"message\\\":\\\"ved *v1.EgressIP event handler 8\\\\nI0105 21:34:40.380745 6424 handler.go:208] Removed *v1.Node event handler 2\\\\nI0105 21:34:40.380782 6424 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:34:40.380830 6424 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.380917 6424 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:34:40.380943 6424 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0105 21:34:40.381000 6424 handler.go:190] Sending *v1.Pod 
event handler 3 for removal\\\\nI0105 21:34:40.381062 6424 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:40.381049 6424 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.381088 6424 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0105 21:34:40.381195 6424 factory.go:656] Stopping watch factory\\\\nI0105 21:34:40.381237 6424 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0105 21:34:40.381097 6424 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:40.381304 6424 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:40.381352 6424 ovnkube.go:599] Stopped ovnkube\\\\nI0105 21:34:40.381390 6424 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0105 21:34:40.381470 6424 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.734240 5000 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.746668 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.754644 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.754689 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.754701 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.754717 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.754744 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:43Z","lastTransitionTime":"2026-01-05T21:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.758332 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.768621 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.779915 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf0
1bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.784542 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.784628 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pk5s\" (UniqueName: \"kubernetes.io/projected/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-kube-api-access-8pk5s\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:43 crc kubenswrapper[5000]: E0105 21:34:43.784711 5000 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:34:43 crc kubenswrapper[5000]: E0105 21:34:43.784777 5000 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs podName:b3a4c991-8f85-4923-afb4-8cc78ceeaed8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:44.284759862 +0000 UTC m=+39.240962321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs") pod "network-metrics-daemon-gpwcw" (UID: "b3a4c991-8f85-4923-afb4-8cc78ceeaed8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.790775 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.801388 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.801877 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pk5s\" (UniqueName: \"kubernetes.io/projected/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-kube-api-access-8pk5s\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.813265 5000 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.826617 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551
c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.837851 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.846578 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.857096 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.857133 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.857141 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:43 crc 
kubenswrapper[5000]: I0105 21:34:43.857158 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.857170 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:43Z","lastTransitionTime":"2026-01-05T21:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.858814 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee
b82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.869442 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.879592 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.891457 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.900847 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.910515 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.927064 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:39Z\\\",\\\"message\\\":\\\":140\\\\nI0105 21:34:38.889877 6299 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:34:38.890592 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0105 21:34:38.890632 6299 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0105 21:34:38.890640 6299 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0105 21:34:38.890667 6299 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0105 21:34:38.890675 6299 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:38.890697 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0105 21:34:38.890736 6299 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:38.890702 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0105 21:34:38.890770 6299 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:38.890779 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:38.890707 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0105 21:34:38.890761 6299 factory.go:656] Stopping watch factory\\\\nI0105 21:34:38.890788 6299 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:40Z\\\",\\\"message\\\":\\\"ved *v1.EgressIP event handler 8\\\\nI0105 21:34:40.380745 6424 handler.go:208] Removed *v1.Node event handler 2\\\\nI0105 21:34:40.380782 6424 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:34:40.380830 6424 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.380917 6424 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:34:40.380943 6424 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0105 21:34:40.381000 6424 handler.go:190] Sending *v1.Pod 
event handler 3 for removal\\\\nI0105 21:34:40.381062 6424 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:40.381049 6424 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.381088 6424 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0105 21:34:40.381195 6424 factory.go:656] Stopping watch factory\\\\nI0105 21:34:40.381237 6424 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0105 21:34:40.381097 6424 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:40.381304 6424 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:40.381352 6424 ovnkube.go:599] Stopped ovnkube\\\\nI0105 21:34:40.381390 6424 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0105 21:34:40.381470 6424 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.938971 5000 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.948859 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.959481 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.959541 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.959550 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.959565 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.959575 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:43Z","lastTransitionTime":"2026-01-05T21:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.961844 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.972847 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc kubenswrapper[5000]: I0105 21:34:43.984745 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:43 crc 
kubenswrapper[5000]: I0105 21:34:43.995993 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:43Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.061875 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.061941 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.061955 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.061980 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.062011 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:44Z","lastTransitionTime":"2026-01-05T21:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.164060 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.164133 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.164145 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.164193 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.164211 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:44Z","lastTransitionTime":"2026-01-05T21:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.266437 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.266502 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.266522 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.266544 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.266558 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:44Z","lastTransitionTime":"2026-01-05T21:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.289952 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:44 crc kubenswrapper[5000]: E0105 21:34:44.290066 5000 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:34:44 crc kubenswrapper[5000]: E0105 21:34:44.290107 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs podName:b3a4c991-8f85-4923-afb4-8cc78ceeaed8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:45.290095132 +0000 UTC m=+40.246297601 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs") pod "network-metrics-daemon-gpwcw" (UID: "b3a4c991-8f85-4923-afb4-8cc78ceeaed8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.369400 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.369447 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.369462 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.369482 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.369495 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:44Z","lastTransitionTime":"2026-01-05T21:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.471845 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.471978 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.472021 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.472053 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.472075 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:44Z","lastTransitionTime":"2026-01-05T21:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.574852 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.574908 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.574926 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.574943 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.574953 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:44Z","lastTransitionTime":"2026-01-05T21:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.678484 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.678525 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.678533 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.678548 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.678557 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:44Z","lastTransitionTime":"2026-01-05T21:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.782311 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.782371 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.782388 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.782414 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.782430 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:44Z","lastTransitionTime":"2026-01-05T21:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.884852 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.884943 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.884960 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.884979 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.884996 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:44Z","lastTransitionTime":"2026-01-05T21:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.987876 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.988251 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.988306 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.988338 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:44 crc kubenswrapper[5000]: I0105 21:34:44.988359 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:44Z","lastTransitionTime":"2026-01-05T21:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.091698 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.091768 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.091785 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.091812 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.091829 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:45Z","lastTransitionTime":"2026-01-05T21:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.194056 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.194138 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.194165 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.194194 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.194217 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:45Z","lastTransitionTime":"2026-01-05T21:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.296857 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.296916 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.296927 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.296943 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.296955 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:45Z","lastTransitionTime":"2026-01-05T21:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.298380 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:45 crc kubenswrapper[5000]: E0105 21:34:45.298517 5000 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:34:45 crc kubenswrapper[5000]: E0105 21:34:45.298569 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs podName:b3a4c991-8f85-4923-afb4-8cc78ceeaed8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:47.298553561 +0000 UTC m=+42.254756040 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs") pod "network-metrics-daemon-gpwcw" (UID: "b3a4c991-8f85-4923-afb4-8cc78ceeaed8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.323321 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.323370 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.323327 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:45 crc kubenswrapper[5000]: E0105 21:34:45.323462 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:45 crc kubenswrapper[5000]: E0105 21:34:45.323589 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:45 crc kubenswrapper[5000]: E0105 21:34:45.323688 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.323716 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:45 crc kubenswrapper[5000]: E0105 21:34:45.323835 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.338212 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.348784 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.366015 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1898f8ec47f033c510647dc2490b8a74aeca698d817c4b87a5e4e339d72eebd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:39Z\\\",\\\"message\\\":\\\":140\\\\nI0105 21:34:38.889877 6299 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0105 21:34:38.890592 6299 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0105 21:34:38.890632 6299 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0105 21:34:38.890640 6299 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0105 21:34:38.890667 6299 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0105 21:34:38.890675 6299 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:38.890697 6299 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0105 21:34:38.890736 6299 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:38.890702 6299 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0105 21:34:38.890770 6299 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:38.890779 6299 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:38.890707 6299 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0105 21:34:38.890761 6299 factory.go:656] Stopping watch factory\\\\nI0105 21:34:38.890788 6299 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:40Z\\\",\\\"message\\\":\\\"ved *v1.EgressIP event handler 8\\\\nI0105 21:34:40.380745 6424 handler.go:208] Removed *v1.Node event handler 2\\\\nI0105 21:34:40.380782 6424 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:34:40.380830 6424 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.380917 6424 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:34:40.380943 6424 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0105 21:34:40.381000 6424 handler.go:190] Sending *v1.Pod 
event handler 3 for removal\\\\nI0105 21:34:40.381062 6424 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:40.381049 6424 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.381088 6424 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0105 21:34:40.381195 6424 factory.go:656] Stopping watch factory\\\\nI0105 21:34:40.381237 6424 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0105 21:34:40.381097 6424 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:40.381304 6424 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:40.381352 6424 ovnkube.go:599] Stopped ovnkube\\\\nI0105 21:34:40.381390 6424 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0105 21:34:40.381470 6424 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.376045 5000 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc 
kubenswrapper[5000]: I0105 21:34:45.389515 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.399809 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.399853 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.399867 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.399910 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.399936 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:45Z","lastTransitionTime":"2026-01-05T21:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.404015 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.423554 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.439159 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.453105 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.469719 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.484047 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.496605 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.502415 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.502474 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.502490 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:45 crc 
kubenswrapper[5000]: I0105 21:34:45.502511 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.502528 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:45Z","lastTransitionTime":"2026-01-05T21:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.511637 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.523303 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c
9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.540594 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.553467 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:45Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.604864 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.604942 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.604953 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.604967 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.604978 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:45Z","lastTransitionTime":"2026-01-05T21:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.707051 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.707100 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.707111 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.707123 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.707132 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:45Z","lastTransitionTime":"2026-01-05T21:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.810086 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.810119 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.810129 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.810145 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.810155 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:45Z","lastTransitionTime":"2026-01-05T21:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.913744 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.913793 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.913805 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.913822 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:45 crc kubenswrapper[5000]: I0105 21:34:45.913833 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:45Z","lastTransitionTime":"2026-01-05T21:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.016168 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.016206 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.016223 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.016241 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.016252 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.118332 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.118400 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.118417 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.118437 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.118449 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.220580 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.220629 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.220639 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.220654 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.220666 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.322827 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.322864 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.322872 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.322899 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.322909 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.424783 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.424838 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.424848 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.424864 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.424874 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.437624 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.437686 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.437698 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.437718 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.437728 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: E0105 21:34:46.448609 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:46Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.452534 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.452569 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.452577 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.452588 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.452601 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: E0105 21:34:46.463018 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:46Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.466969 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.467027 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.467045 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.467071 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.467093 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: E0105 21:34:46.480811 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:46Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.484303 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.484328 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.484336 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.484346 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.484355 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.502996 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.503027 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.503040 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.503054 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.503062 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: E0105 21:34:46.519990 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:46Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:46 crc kubenswrapper[5000]: E0105 21:34:46.520105 5000 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.527327 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.527365 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.527378 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.527395 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.527408 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.630132 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.630177 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.630187 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.630206 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.630223 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.732327 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.732362 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.732370 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.732384 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.732393 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.834881 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.834990 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.835015 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.835055 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.835079 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.937399 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.937493 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.937559 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.937585 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:46 crc kubenswrapper[5000]: I0105 21:34:46.937603 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:46Z","lastTransitionTime":"2026-01-05T21:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.039638 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.039682 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.039690 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.039705 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.039716 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:47Z","lastTransitionTime":"2026-01-05T21:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.142519 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.142587 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.142602 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.142643 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.142661 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:47Z","lastTransitionTime":"2026-01-05T21:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.245402 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.245509 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.245533 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.245567 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.245589 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:47Z","lastTransitionTime":"2026-01-05T21:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.318759 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:47 crc kubenswrapper[5000]: E0105 21:34:47.318967 5000 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:34:47 crc kubenswrapper[5000]: E0105 21:34:47.319064 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs podName:b3a4c991-8f85-4923-afb4-8cc78ceeaed8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:51.319042379 +0000 UTC m=+46.275244858 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs") pod "network-metrics-daemon-gpwcw" (UID: "b3a4c991-8f85-4923-afb4-8cc78ceeaed8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.324168 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.324180 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.324237 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:47 crc kubenswrapper[5000]: E0105 21:34:47.324399 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.324829 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:47 crc kubenswrapper[5000]: E0105 21:34:47.324928 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:47 crc kubenswrapper[5000]: E0105 21:34:47.325147 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:47 crc kubenswrapper[5000]: E0105 21:34:47.325241 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.347744 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.347799 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.347815 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.347835 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.347847 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:47Z","lastTransitionTime":"2026-01-05T21:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.450493 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.450525 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.450550 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.450565 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.450574 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:47Z","lastTransitionTime":"2026-01-05T21:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.552624 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.552670 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.552679 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.552695 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.552705 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:47Z","lastTransitionTime":"2026-01-05T21:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.655611 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.655678 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.655701 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.655730 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.655751 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:47Z","lastTransitionTime":"2026-01-05T21:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.758756 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.758846 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.758864 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.758884 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.758919 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:47Z","lastTransitionTime":"2026-01-05T21:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.861544 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.861602 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.861619 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.861642 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.861660 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:47Z","lastTransitionTime":"2026-01-05T21:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.964181 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.964258 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.964280 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.964311 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:47 crc kubenswrapper[5000]: I0105 21:34:47.964334 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:47Z","lastTransitionTime":"2026-01-05T21:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.067032 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.067112 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.067139 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.067170 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.067192 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:48Z","lastTransitionTime":"2026-01-05T21:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.170868 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.170928 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.170941 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.170960 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.170975 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:48Z","lastTransitionTime":"2026-01-05T21:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.274107 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.274172 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.274188 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.274214 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.274229 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:48Z","lastTransitionTime":"2026-01-05T21:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.377379 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.377430 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.377440 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.377456 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.377466 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:48Z","lastTransitionTime":"2026-01-05T21:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.479644 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.479790 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.480054 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.480087 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.480132 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:48Z","lastTransitionTime":"2026-01-05T21:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.582025 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.582071 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.582084 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.582097 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.582108 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:48Z","lastTransitionTime":"2026-01-05T21:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.685226 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.685269 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.685284 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.685301 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.685311 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:48Z","lastTransitionTime":"2026-01-05T21:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.787847 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.787976 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.787992 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.788013 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.788029 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:48Z","lastTransitionTime":"2026-01-05T21:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.889850 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.889883 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.889904 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.889919 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.889928 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:48Z","lastTransitionTime":"2026-01-05T21:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.993213 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.993277 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.993304 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.993353 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:48 crc kubenswrapper[5000]: I0105 21:34:48.993375 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:48Z","lastTransitionTime":"2026-01-05T21:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.096772 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.096826 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.096836 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.096856 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.096867 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:49Z","lastTransitionTime":"2026-01-05T21:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.199520 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.199592 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.199603 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.199618 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.199627 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:49Z","lastTransitionTime":"2026-01-05T21:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.301351 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.301392 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.301401 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.301415 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.301423 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:49Z","lastTransitionTime":"2026-01-05T21:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.322835 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:49 crc kubenswrapper[5000]: E0105 21:34:49.322976 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.323068 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.323082 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:49 crc kubenswrapper[5000]: E0105 21:34:49.323151 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:49 crc kubenswrapper[5000]: E0105 21:34:49.323261 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.323274 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:49 crc kubenswrapper[5000]: E0105 21:34:49.323403 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.405000 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.405081 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.405101 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.405636 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.405711 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:49Z","lastTransitionTime":"2026-01-05T21:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.509575 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.509639 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.509651 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.509777 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.509788 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:49Z","lastTransitionTime":"2026-01-05T21:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.612801 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.612871 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.612910 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.612932 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.612949 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:49Z","lastTransitionTime":"2026-01-05T21:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.715788 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.715870 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.715924 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.715953 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.715973 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:49Z","lastTransitionTime":"2026-01-05T21:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.818871 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.819002 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.819027 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.819055 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.819076 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:49Z","lastTransitionTime":"2026-01-05T21:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.921026 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.921071 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.921085 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.921100 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:49 crc kubenswrapper[5000]: I0105 21:34:49.921109 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:49Z","lastTransitionTime":"2026-01-05T21:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.023564 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.023885 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.024001 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.024125 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.024213 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:50Z","lastTransitionTime":"2026-01-05T21:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.127177 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.127220 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.127233 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.127250 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.127260 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:50Z","lastTransitionTime":"2026-01-05T21:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.229730 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.229771 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.229781 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.229796 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.229805 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:50Z","lastTransitionTime":"2026-01-05T21:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.332155 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.332190 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.332199 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.332211 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.332220 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:50Z","lastTransitionTime":"2026-01-05T21:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.434026 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.434077 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.434089 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.434106 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.434117 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:50Z","lastTransitionTime":"2026-01-05T21:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.536599 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.536643 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.536654 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.536670 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.536682 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:50Z","lastTransitionTime":"2026-01-05T21:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.639646 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.639712 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.639736 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.639766 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.639788 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:50Z","lastTransitionTime":"2026-01-05T21:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.741556 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.741586 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.741597 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.741612 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.741626 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:50Z","lastTransitionTime":"2026-01-05T21:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.844282 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.844318 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.844327 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.844341 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.844350 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:50Z","lastTransitionTime":"2026-01-05T21:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.946412 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.946470 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.946482 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.946500 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:50 crc kubenswrapper[5000]: I0105 21:34:50.946517 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:50Z","lastTransitionTime":"2026-01-05T21:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.049368 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.049417 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.049429 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.049446 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.049458 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:51Z","lastTransitionTime":"2026-01-05T21:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.152160 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.152383 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.152448 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.152516 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.152571 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:51Z","lastTransitionTime":"2026-01-05T21:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.254940 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.254976 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.255011 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.255025 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.255033 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:51Z","lastTransitionTime":"2026-01-05T21:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.323293 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.323313 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.323319 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:51 crc kubenswrapper[5000]: E0105 21:34:51.323638 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.323435 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:51 crc kubenswrapper[5000]: E0105 21:34:51.323425 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:51 crc kubenswrapper[5000]: E0105 21:34:51.323720 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:51 crc kubenswrapper[5000]: E0105 21:34:51.323796 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.357563 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.357601 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.357610 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.357626 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.357636 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:51Z","lastTransitionTime":"2026-01-05T21:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.358200 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.359569 5000 scope.go:117] "RemoveContainer" containerID="4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.365166 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:51 crc kubenswrapper[5000]: E0105 21:34:51.365276 5000 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:34:51 crc kubenswrapper[5000]: E0105 21:34:51.365341 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs podName:b3a4c991-8f85-4923-afb4-8cc78ceeaed8 nodeName:}" failed. No retries permitted until 2026-01-05 21:34:59.365325063 +0000 UTC m=+54.321527532 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs") pod "network-metrics-daemon-gpwcw" (UID: "b3a4c991-8f85-4923-afb4-8cc78ceeaed8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.376033 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.388408 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.397215 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc 
kubenswrapper[5000]: I0105 21:34:51.410071 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.421102 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.430556 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.440085 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.453649 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.459402 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.459430 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.459437 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.459450 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.459458 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:51Z","lastTransitionTime":"2026-01-05T21:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.465308 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.479388 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.491952 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.504144 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.514050 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.556981 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:40Z\\\",\\\"message\\\":\\\"ved *v1.EgressIP event handler 8\\\\nI0105 21:34:40.380745 6424 handler.go:208] Removed *v1.Node event handler 2\\\\nI0105 21:34:40.380782 6424 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:34:40.380830 6424 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.380917 6424 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:34:40.380943 6424 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0105 21:34:40.381000 6424 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0105 21:34:40.381062 6424 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:40.381049 6424 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.381088 6424 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0105 21:34:40.381195 6424 factory.go:656] Stopping watch factory\\\\nI0105 21:34:40.381237 6424 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0105 21:34:40.381097 6424 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:40.381304 6424 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:40.381352 6424 ovnkube.go:599] Stopped ovnkube\\\\nI0105 21:34:40.381390 6424 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0105 21:34:40.381470 6424 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4e
b3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.561613 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.561646 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.561656 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.561671 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.561681 5000 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:51Z","lastTransitionTime":"2026-01-05T21:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.580664 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.589944 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.599747 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/1.log" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.602191 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerStarted","Data":"801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2"} Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.602551 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.616629 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.627542 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b
3f6608be3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.638140 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc 
kubenswrapper[5000]: I0105 21:34:51.649022 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.659568 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.663853 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.663908 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.663918 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.663933 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.663942 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:51Z","lastTransitionTime":"2026-01-05T21:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.672960 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61
d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.694827 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.709053 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.724940 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.739808 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.752166 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.763923 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.765662 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:51 crc 
kubenswrapper[5000]: I0105 21:34:51.765689 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.765698 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.765712 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.765731 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:51Z","lastTransitionTime":"2026-01-05T21:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.773788 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.790312 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:40Z\\\",\\\"message\\\":\\\"ved *v1.EgressIP event handler 8\\\\nI0105 21:34:40.380745 6424 handler.go:208] Removed *v1.Node event handler 2\\\\nI0105 21:34:40.380782 6424 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:34:40.380830 6424 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.380917 6424 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:34:40.380943 6424 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0105 21:34:40.381000 6424 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0105 21:34:40.381062 6424 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:40.381049 6424 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.381088 6424 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0105 21:34:40.381195 6424 factory.go:656] Stopping watch factory\\\\nI0105 21:34:40.381237 6424 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0105 21:34:40.381097 6424 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:40.381304 6424 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:40.381352 6424 ovnkube.go:599] Stopped ovnkube\\\\nI0105 21:34:40.381390 6424 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0105 21:34:40.381470 6424 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.801258 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.809693 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:51Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.868528 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.868562 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.868571 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.868584 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.868594 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:51Z","lastTransitionTime":"2026-01-05T21:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.971046 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.971081 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.971090 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.971107 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:51 crc kubenswrapper[5000]: I0105 21:34:51.971118 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:51Z","lastTransitionTime":"2026-01-05T21:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.073463 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.073526 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.073537 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.073552 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.073565 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:52Z","lastTransitionTime":"2026-01-05T21:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.176119 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.176159 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.176180 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.176196 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.176207 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:52Z","lastTransitionTime":"2026-01-05T21:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.278284 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.278326 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.278338 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.278354 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.278366 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:52Z","lastTransitionTime":"2026-01-05T21:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.381701 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.381793 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.381811 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.382786 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.382853 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:52Z","lastTransitionTime":"2026-01-05T21:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.486008 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.486045 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.486056 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.486071 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.486082 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:52Z","lastTransitionTime":"2026-01-05T21:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.588544 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.588586 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.588599 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.588616 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.588632 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:52Z","lastTransitionTime":"2026-01-05T21:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.606945 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/2.log" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.607551 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/1.log" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.609943 5000 generic.go:334] "Generic (PLEG): container finished" podID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerID="801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2" exitCode=1 Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.609981 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2"} Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.610013 5000 scope.go:117] "RemoveContainer" containerID="4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.610905 5000 scope.go:117] "RemoveContainer" containerID="801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2" Jan 05 21:34:52 crc kubenswrapper[5000]: E0105 21:34:52.611119 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.628440 5000 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.639596 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.659943 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c9560d59c1b8f6cb42ed695db7b5a6d895e3163c93d9a49f7843afb65c6ddf1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:40Z\\\",\\\"message\\\":\\\"ved *v1.EgressIP event handler 8\\\\nI0105 21:34:40.380745 6424 handler.go:208] Removed *v1.Node event handler 2\\\\nI0105 21:34:40.380782 6424 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:34:40.380830 6424 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.380917 6424 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:34:40.380943 6424 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0105 21:34:40.381000 6424 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0105 21:34:40.381062 6424 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0105 21:34:40.381049 6424 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:34:40.381088 6424 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0105 21:34:40.381195 6424 factory.go:656] Stopping watch factory\\\\nI0105 21:34:40.381237 6424 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0105 21:34:40.381097 6424 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0105 21:34:40.381304 6424 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0105 21:34:40.381352 6424 ovnkube.go:599] Stopped ovnkube\\\\nI0105 21:34:40.381390 6424 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0105 21:34:40.381470 6424 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:52Z\\\",\\\"message\\\":\\\"d to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": 
failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z]\\\\nI0105 21:34:52.075012 6641 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 in node crc\\\\nI0105 21:34:52.075016 6641 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:34:52.075019 6641 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 after 0 failed attempt(s)\\\\nI0105 21:34:52.075025 6641 default_network_controller.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni
-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.672105 5000 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.683720 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.691494 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.691544 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.691552 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.691565 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.691575 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:52Z","lastTransitionTime":"2026-01-05T21:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.696912 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.707450 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.719222 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc 
kubenswrapper[5000]: I0105 21:34:52.731636 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6
faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 
21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.742319 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.752318 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.765630 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.776539 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.787748 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.793981 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.794022 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.794035 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 
21:34:52.794051 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.794063 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:52Z","lastTransitionTime":"2026-01-05T21:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.802502 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.812638 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc08
6a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.896170 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.896234 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.896253 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.896276 5000 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.896293 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:52Z","lastTransitionTime":"2026-01-05T21:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.999320 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.999370 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.999382 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.999399 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:52 crc kubenswrapper[5000]: I0105 21:34:52.999411 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:52Z","lastTransitionTime":"2026-01-05T21:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.102229 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.102281 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.102292 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.102309 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.102323 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:53Z","lastTransitionTime":"2026-01-05T21:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.204390 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.204494 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.204516 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.204551 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.204572 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:53Z","lastTransitionTime":"2026-01-05T21:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.307486 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.307542 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.307560 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.307584 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.307601 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:53Z","lastTransitionTime":"2026-01-05T21:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.323639 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.323727 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.323740 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.323741 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:53 crc kubenswrapper[5000]: E0105 21:34:53.323842 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:53 crc kubenswrapper[5000]: E0105 21:34:53.324187 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:34:53 crc kubenswrapper[5000]: E0105 21:34:53.324268 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:53 crc kubenswrapper[5000]: E0105 21:34:53.324360 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.412206 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.412266 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.412279 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.412298 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.412310 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:53Z","lastTransitionTime":"2026-01-05T21:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.514669 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.514708 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.514720 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.514745 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.514757 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:53Z","lastTransitionTime":"2026-01-05T21:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.614255 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/2.log" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.616072 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.616095 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.616104 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.616118 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.616128 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:53Z","lastTransitionTime":"2026-01-05T21:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.617151 5000 scope.go:117] "RemoveContainer" containerID="801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2" Jan 05 21:34:53 crc kubenswrapper[5000]: E0105 21:34:53.617312 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.629113 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.637762 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.645439 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc 
kubenswrapper[5000]: I0105 21:34:53.655597 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.665281 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.675891 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.685578 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.698475 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.710437 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.717753 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.717794 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.717805 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.717820 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.717831 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:53Z","lastTransitionTime":"2026-01-05T21:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.722768 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T
21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.732465 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.742845 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.750631 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.765334 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:52Z\\\",\\\"message\\\":\\\"d to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z]\\\\nI0105 21:34:52.075012 6641 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 in node crc\\\\nI0105 21:34:52.075016 6641 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:34:52.075019 6641 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 after 0 failed attempt(s)\\\\nI0105 21:34:52.075025 6641 default_network_controller.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4e
b3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.777168 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.784657 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:53Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.820781 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.820820 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.820830 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.820845 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.820856 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:53Z","lastTransitionTime":"2026-01-05T21:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.923645 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.923721 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.923747 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.923776 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:53 crc kubenswrapper[5000]: I0105 21:34:53.923798 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:53Z","lastTransitionTime":"2026-01-05T21:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.025603 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.025641 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.025652 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.025669 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.025678 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:54Z","lastTransitionTime":"2026-01-05T21:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.127795 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.127842 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.127856 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.127875 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.127918 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:54Z","lastTransitionTime":"2026-01-05T21:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.230958 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.230994 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.231004 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.231018 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.231028 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:54Z","lastTransitionTime":"2026-01-05T21:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.333061 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.333117 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.333129 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.333144 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.333152 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:54Z","lastTransitionTime":"2026-01-05T21:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.436473 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.436558 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.436573 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.436591 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.436603 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:54Z","lastTransitionTime":"2026-01-05T21:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.539283 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.539323 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.539334 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.539356 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.539375 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:54Z","lastTransitionTime":"2026-01-05T21:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.642067 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.642111 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.642123 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.642140 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.642150 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:54Z","lastTransitionTime":"2026-01-05T21:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.744502 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.744552 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.744569 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.744587 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.744598 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:54Z","lastTransitionTime":"2026-01-05T21:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.847791 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.847829 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.847838 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.847851 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.847860 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:54Z","lastTransitionTime":"2026-01-05T21:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.950483 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.950550 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.950567 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.950592 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:54 crc kubenswrapper[5000]: I0105 21:34:54.950610 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:54Z","lastTransitionTime":"2026-01-05T21:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.053577 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.053612 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.053624 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.053642 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.053654 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:55Z","lastTransitionTime":"2026-01-05T21:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.156672 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.156737 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.156772 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.156789 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.156800 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:55Z","lastTransitionTime":"2026-01-05T21:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.207191 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.207369 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-05 21:35:27.207343767 +0000 UTC m=+82.163546226 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.207463 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.207522 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.207646 5000 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.207729 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-05 21:35:27.207707478 +0000 UTC m=+82.163909977 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.207659 5000 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.207799 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:35:27.20778472 +0000 UTC m=+82.163987219 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.258766 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.258804 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.258817 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.258833 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.258846 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:55Z","lastTransitionTime":"2026-01-05T21:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.308003 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.308053 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.308181 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.308200 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.308215 5000 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.308262 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:35:27.308246519 +0000 UTC m=+82.264448988 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.308272 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.308310 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.308329 5000 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.308387 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:35:27.308368292 +0000 UTC m=+82.264570841 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.323267 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.323408 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.323942 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.324006 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.324056 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.324142 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.324393 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:55 crc kubenswrapper[5000]: E0105 21:34:55.324545 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.336773 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e
5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.347935 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.360427 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.360480 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.360490 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.360505 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.360515 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:55Z","lastTransitionTime":"2026-01-05T21:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.371034 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:52Z\\\",\\\"message\\\":\\\"d to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z]\\\\nI0105 21:34:52.075012 6641 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 in node crc\\\\nI0105 21:34:52.075016 6641 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:34:52.075019 6641 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 after 0 failed attempt(s)\\\\nI0105 21:34:52.075025 6641 default_network_controller.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4e
b3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.382578 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.394518 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.406407 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.418696 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.431060 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.443928 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.456890 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.463370 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.463456 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.463495 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.463521 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.463539 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:55Z","lastTransitionTime":"2026-01-05T21:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.475065 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b0
84652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.485849 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.497667 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.508562 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.522821 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.535372 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:55Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.565212 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.565258 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.565278 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 
21:34:55.565300 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.565314 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:55Z","lastTransitionTime":"2026-01-05T21:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.668595 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.668634 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.668646 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.668662 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.668673 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:55Z","lastTransitionTime":"2026-01-05T21:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.770786 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.770821 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.770830 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.770869 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.770879 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:55Z","lastTransitionTime":"2026-01-05T21:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.872745 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.872779 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.872786 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.872823 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.872833 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:55Z","lastTransitionTime":"2026-01-05T21:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.975343 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.975459 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.975468 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.975480 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:55 crc kubenswrapper[5000]: I0105 21:34:55.975493 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:55Z","lastTransitionTime":"2026-01-05T21:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.077738 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.077804 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.077819 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.077833 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.077842 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.180359 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.180412 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.180424 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.180442 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.180454 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.283520 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.283561 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.283571 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.283585 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.283595 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.386261 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.386311 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.386323 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.386338 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.386351 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.489437 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.490305 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.490334 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.490356 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.490370 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.592571 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.592869 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.592988 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.593165 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.593247 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.651828 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.651872 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.651883 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.651923 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.651933 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: E0105 21:34:56.664005 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:56Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.667302 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.667413 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.667422 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.667434 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.667443 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: E0105 21:34:56.679451 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:56Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.682617 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.682649 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.682659 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.682673 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.682683 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: E0105 21:34:56.693534 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:56Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.696748 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.696783 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.696791 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.696805 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.696814 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: E0105 21:34:56.710300 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:56Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.714212 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.714265 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.714279 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.714297 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.714310 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: E0105 21:34:56.727238 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:56Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:56 crc kubenswrapper[5000]: E0105 21:34:56.727391 5000 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.729100 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.729140 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.729157 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.729179 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.729195 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.831658 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.831696 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.831706 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.831722 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.831732 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.935232 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.935293 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.935311 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.935335 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:56 crc kubenswrapper[5000]: I0105 21:34:56.935353 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:56Z","lastTransitionTime":"2026-01-05T21:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.038137 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.038181 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.038195 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.038210 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.038220 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:57Z","lastTransitionTime":"2026-01-05T21:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.140840 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.140875 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.140883 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.140899 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.140926 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:57Z","lastTransitionTime":"2026-01-05T21:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.242602 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.242643 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.242654 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.242668 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.242679 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:57Z","lastTransitionTime":"2026-01-05T21:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.323185 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.323185 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.323208 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.323339 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:57 crc kubenswrapper[5000]: E0105 21:34:57.323488 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:57 crc kubenswrapper[5000]: E0105 21:34:57.323593 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:34:57 crc kubenswrapper[5000]: E0105 21:34:57.323646 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:57 crc kubenswrapper[5000]: E0105 21:34:57.323722 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.344870 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.344936 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.344946 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.344962 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.344974 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:57Z","lastTransitionTime":"2026-01-05T21:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.446853 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.446936 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.446954 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.446970 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.446980 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:57Z","lastTransitionTime":"2026-01-05T21:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.549321 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.549359 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.549367 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.549381 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.549390 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:57Z","lastTransitionTime":"2026-01-05T21:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.651939 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.652020 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.652039 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.652058 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.652067 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:57Z","lastTransitionTime":"2026-01-05T21:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.755310 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.755383 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.755400 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.755426 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.755447 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:57Z","lastTransitionTime":"2026-01-05T21:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.858036 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.858075 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.858087 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.858103 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.858114 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:57Z","lastTransitionTime":"2026-01-05T21:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.961123 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.961153 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.961163 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.961178 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:57 crc kubenswrapper[5000]: I0105 21:34:57.961188 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:57Z","lastTransitionTime":"2026-01-05T21:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.063789 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.063834 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.063845 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.063861 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.063873 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:58Z","lastTransitionTime":"2026-01-05T21:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.166573 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.166610 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.166621 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.166637 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.166647 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:58Z","lastTransitionTime":"2026-01-05T21:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.268702 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.268752 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.268763 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.268780 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.268792 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:58Z","lastTransitionTime":"2026-01-05T21:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.297815 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.307517 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.315569 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.329026 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf0
1bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.340069 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.350392 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.364737 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.371159 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:58 crc 
kubenswrapper[5000]: I0105 21:34:58.371196 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.371206 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.371221 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.371233 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:58Z","lastTransitionTime":"2026-01-05T21:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.374894 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.387442 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.401912 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.415881 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.428763 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.446784 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:52Z\\\",\\\"message\\\":\\\"d to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z]\\\\nI0105 21:34:52.075012 6641 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 in node crc\\\\nI0105 21:34:52.075016 6641 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:34:52.075019 6641 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 after 0 failed attempt(s)\\\\nI0105 21:34:52.075025 6641 default_network_controller.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4e
b3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.461697 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.473730 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.473780 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.473794 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.473815 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.473829 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:58Z","lastTransitionTime":"2026-01-05T21:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.476018 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.487248 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.500657 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.515292 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:58Z is after 2025-08-24T17:21:41Z" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.576570 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.576604 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.576615 5000 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.576630 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.576640 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:58Z","lastTransitionTime":"2026-01-05T21:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.679549 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.679676 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.679690 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.679708 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.679720 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:58Z","lastTransitionTime":"2026-01-05T21:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.782670 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.782727 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.782739 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.782764 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.782776 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:58Z","lastTransitionTime":"2026-01-05T21:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.885725 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.885772 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.885783 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.885801 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.885815 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:58Z","lastTransitionTime":"2026-01-05T21:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.988126 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.988163 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.988172 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.988185 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:58 crc kubenswrapper[5000]: I0105 21:34:58.988194 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:58Z","lastTransitionTime":"2026-01-05T21:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.091058 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.091100 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.091110 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.091129 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.091146 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:59Z","lastTransitionTime":"2026-01-05T21:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.193582 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.193628 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.193641 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.193655 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.193665 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:59Z","lastTransitionTime":"2026-01-05T21:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.295802 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.295850 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.295860 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.295878 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.295908 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:59Z","lastTransitionTime":"2026-01-05T21:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.323454 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.323489 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:34:59 crc kubenswrapper[5000]: E0105 21:34:59.323606 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.323627 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:59 crc kubenswrapper[5000]: E0105 21:34:59.323737 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.323788 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:34:59 crc kubenswrapper[5000]: E0105 21:34:59.323985 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:34:59 crc kubenswrapper[5000]: E0105 21:34:59.324047 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.398368 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.398821 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.398963 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.399063 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.399268 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:59Z","lastTransitionTime":"2026-01-05T21:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.452914 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:34:59 crc kubenswrapper[5000]: E0105 21:34:59.453071 5000 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:34:59 crc kubenswrapper[5000]: E0105 21:34:59.453361 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs podName:b3a4c991-8f85-4923-afb4-8cc78ceeaed8 nodeName:}" failed. No retries permitted until 2026-01-05 21:35:15.453335956 +0000 UTC m=+70.409538475 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs") pod "network-metrics-daemon-gpwcw" (UID: "b3a4c991-8f85-4923-afb4-8cc78ceeaed8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.502329 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.502399 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.502415 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.502441 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.502453 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:59Z","lastTransitionTime":"2026-01-05T21:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.605249 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.605584 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.605655 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.605723 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.605824 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:59Z","lastTransitionTime":"2026-01-05T21:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.708305 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.708515 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.708654 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.708789 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.708865 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:59Z","lastTransitionTime":"2026-01-05T21:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.811168 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.811209 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.811219 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.811233 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.811242 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:59Z","lastTransitionTime":"2026-01-05T21:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.913002 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.913030 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.913037 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.913048 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:34:59 crc kubenswrapper[5000]: I0105 21:34:59.913057 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:34:59Z","lastTransitionTime":"2026-01-05T21:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.015113 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.015344 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.015427 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.015551 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.015636 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:00Z","lastTransitionTime":"2026-01-05T21:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.119714 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.119757 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.119769 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.119786 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.119799 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:00Z","lastTransitionTime":"2026-01-05T21:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.222052 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.222100 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.222119 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.222142 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.222162 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:00Z","lastTransitionTime":"2026-01-05T21:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.324236 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.324493 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.324584 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.324689 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.324768 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:00Z","lastTransitionTime":"2026-01-05T21:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.427327 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.427353 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.427364 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.427379 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.427392 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:00Z","lastTransitionTime":"2026-01-05T21:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.530081 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.530485 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.530577 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.530662 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.530725 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:00Z","lastTransitionTime":"2026-01-05T21:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.633967 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.634310 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.634392 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.634471 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.634591 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:00Z","lastTransitionTime":"2026-01-05T21:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.737384 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.737738 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.737828 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.737930 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.738034 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:00Z","lastTransitionTime":"2026-01-05T21:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.840245 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.840487 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.840620 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.840741 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.840823 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:00Z","lastTransitionTime":"2026-01-05T21:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.942563 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.942600 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.942611 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.942627 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:00 crc kubenswrapper[5000]: I0105 21:35:00.942639 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:00Z","lastTransitionTime":"2026-01-05T21:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.044977 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.045031 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.045047 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.045067 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.045080 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:01Z","lastTransitionTime":"2026-01-05T21:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.148430 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.148493 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.148502 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.148517 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.148526 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:01Z","lastTransitionTime":"2026-01-05T21:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.250941 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.251693 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.251791 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.251885 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.252001 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:01Z","lastTransitionTime":"2026-01-05T21:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.323645 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.323742 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:01 crc kubenswrapper[5000]: E0105 21:35:01.323789 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.323654 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:01 crc kubenswrapper[5000]: E0105 21:35:01.323841 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.323664 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:01 crc kubenswrapper[5000]: E0105 21:35:01.323868 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:01 crc kubenswrapper[5000]: E0105 21:35:01.323938 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.354805 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.354835 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.354843 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.354855 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.354865 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:01Z","lastTransitionTime":"2026-01-05T21:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.457663 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.457700 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.457708 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.457722 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.457730 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:01Z","lastTransitionTime":"2026-01-05T21:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.560476 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.560516 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.560527 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.560542 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.560553 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:01Z","lastTransitionTime":"2026-01-05T21:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.663042 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.663084 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.663092 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.663105 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.663113 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:01Z","lastTransitionTime":"2026-01-05T21:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.765804 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.765850 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.765864 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.765878 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.765949 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:01Z","lastTransitionTime":"2026-01-05T21:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.869347 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.869426 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.869473 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.869499 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.869516 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:01Z","lastTransitionTime":"2026-01-05T21:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.971387 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.971427 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.971438 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.971453 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:01 crc kubenswrapper[5000]: I0105 21:35:01.971464 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:01Z","lastTransitionTime":"2026-01-05T21:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.073906 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.073982 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.073993 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.074007 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.074017 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:02Z","lastTransitionTime":"2026-01-05T21:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.176875 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.176944 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.176954 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.176969 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.176980 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:02Z","lastTransitionTime":"2026-01-05T21:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.279471 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.279517 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.279526 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.279541 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.279552 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:02Z","lastTransitionTime":"2026-01-05T21:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.383622 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.383667 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.383676 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.383695 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.383706 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:02Z","lastTransitionTime":"2026-01-05T21:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.486330 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.486757 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.486820 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.486916 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.487003 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:02Z","lastTransitionTime":"2026-01-05T21:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.589809 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.589848 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.589857 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.589871 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.589883 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:02Z","lastTransitionTime":"2026-01-05T21:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.692969 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.693014 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.693024 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.693040 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.693051 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:02Z","lastTransitionTime":"2026-01-05T21:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.795871 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.795950 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.795965 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.795986 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.795998 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:02Z","lastTransitionTime":"2026-01-05T21:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.897942 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.897999 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.898007 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.898020 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:02 crc kubenswrapper[5000]: I0105 21:35:02.898029 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:02Z","lastTransitionTime":"2026-01-05T21:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.000488 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.000544 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.000554 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.000575 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.000587 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:03Z","lastTransitionTime":"2026-01-05T21:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.102651 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.102699 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.102710 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.102727 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.102739 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:03Z","lastTransitionTime":"2026-01-05T21:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.205805 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.205849 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.205857 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.205872 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.205881 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:03Z","lastTransitionTime":"2026-01-05T21:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.308321 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.308388 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.308400 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.308416 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.308427 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:03Z","lastTransitionTime":"2026-01-05T21:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.323690 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:03 crc kubenswrapper[5000]: E0105 21:35:03.323789 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.323957 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.324099 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:03 crc kubenswrapper[5000]: E0105 21:35:03.324116 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:03 crc kubenswrapper[5000]: E0105 21:35:03.324316 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.324526 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:03 crc kubenswrapper[5000]: E0105 21:35:03.324730 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.412110 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.412158 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.412169 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.412185 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.412196 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:03Z","lastTransitionTime":"2026-01-05T21:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.514699 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.514731 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.514741 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.514754 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.514765 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:03Z","lastTransitionTime":"2026-01-05T21:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.617208 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.617443 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.617536 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.617625 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.617699 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:03Z","lastTransitionTime":"2026-01-05T21:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.720384 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.720611 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.720717 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.720787 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.720849 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:03Z","lastTransitionTime":"2026-01-05T21:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.823967 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.823997 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.824004 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.824016 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.824026 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:03Z","lastTransitionTime":"2026-01-05T21:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.926827 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.926902 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.926910 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.926928 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:03 crc kubenswrapper[5000]: I0105 21:35:03.926937 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:03Z","lastTransitionTime":"2026-01-05T21:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.030711 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.030805 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.030822 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.030845 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.030862 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:04Z","lastTransitionTime":"2026-01-05T21:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.135318 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.135678 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.135692 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.135707 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.135719 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:04Z","lastTransitionTime":"2026-01-05T21:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.238770 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.238844 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.238869 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.238929 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.238953 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:04Z","lastTransitionTime":"2026-01-05T21:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.342250 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.342967 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.343157 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.343247 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.343328 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:04Z","lastTransitionTime":"2026-01-05T21:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.446680 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.446748 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.446758 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.446772 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.446782 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:04Z","lastTransitionTime":"2026-01-05T21:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.549075 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.549126 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.549141 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.549162 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.549177 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:04Z","lastTransitionTime":"2026-01-05T21:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.652444 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.652479 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.652490 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.652505 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.652517 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:04Z","lastTransitionTime":"2026-01-05T21:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.754864 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.754925 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.754936 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.754949 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.754958 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:04Z","lastTransitionTime":"2026-01-05T21:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.856792 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.856826 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.856838 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.856855 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.856867 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:04Z","lastTransitionTime":"2026-01-05T21:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.959470 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.959513 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.959548 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.959566 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:04 crc kubenswrapper[5000]: I0105 21:35:04.959578 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:04Z","lastTransitionTime":"2026-01-05T21:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.062489 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.062580 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.062599 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.062628 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.062652 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:05Z","lastTransitionTime":"2026-01-05T21:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.164529 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.164567 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.164577 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.164592 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.164601 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:05Z","lastTransitionTime":"2026-01-05T21:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.266329 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.266360 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.266369 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.266384 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.266402 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:05Z","lastTransitionTime":"2026-01-05T21:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.322941 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.322965 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.322965 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.323012 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:05 crc kubenswrapper[5000]: E0105 21:35:05.324067 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:05 crc kubenswrapper[5000]: E0105 21:35:05.324138 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:05 crc kubenswrapper[5000]: E0105 21:35:05.324229 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:05 crc kubenswrapper[5000]: E0105 21:35:05.324293 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.336411 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.346779 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.356839 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc 
kubenswrapper[5000]: I0105 21:35:05.366926 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.368406 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.368458 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.368469 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.368483 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.368515 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:05Z","lastTransitionTime":"2026-01-05T21:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.377359 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.389190 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name
\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.400109 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.412954 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.426673 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.439449 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.450637 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.462210 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.471319 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:05 crc 
kubenswrapper[5000]: I0105 21:35:05.471395 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.471405 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.471419 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.471428 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:05Z","lastTransitionTime":"2026-01-05T21:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.471733 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.481646 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8968859c-a813-4486-a2b4-cf020c7b00f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://020f8df6592a02a08387a0fe9f50a9a54d9c0e661aab8f921e5a39bffb183928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2d7aa2b6fe302f377189d5a76bc6b5b2b78ad2c2f9d89952f720f02292aff7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcf28c82ea7e5c99e63e2a89c0703830ff0aecc1132e28157e8986e6a6b4bc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.496680 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:52Z\\\",\\\"message\\\":\\\"d to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z]\\\\nI0105 21:34:52.075012 6641 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 in node crc\\\\nI0105 21:34:52.075016 6641 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:34:52.075019 6641 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 after 0 failed attempt(s)\\\\nI0105 21:34:52.075025 6641 default_network_controller.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4e
b3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.506841 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.515629 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:05Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.573583 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.573630 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.573641 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.573658 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.573670 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:05Z","lastTransitionTime":"2026-01-05T21:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.675124 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.675228 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.675237 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.675256 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.675264 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:05Z","lastTransitionTime":"2026-01-05T21:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.777858 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.777891 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.777913 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.777925 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.777934 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:05Z","lastTransitionTime":"2026-01-05T21:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.879962 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.880000 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.880008 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.880021 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.880032 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:05Z","lastTransitionTime":"2026-01-05T21:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.982017 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.982063 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.982074 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.982095 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:05 crc kubenswrapper[5000]: I0105 21:35:05.982105 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:05Z","lastTransitionTime":"2026-01-05T21:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.084238 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.084318 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.084330 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.084346 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.084361 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:06Z","lastTransitionTime":"2026-01-05T21:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.185987 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.186025 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.186033 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.186046 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.186055 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:06Z","lastTransitionTime":"2026-01-05T21:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.290056 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.290102 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.290119 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.290144 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.290156 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:06Z","lastTransitionTime":"2026-01-05T21:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.392477 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.392527 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.392538 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.392556 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.392567 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:06Z","lastTransitionTime":"2026-01-05T21:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.495211 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.495254 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.495265 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.495281 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.495292 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:06Z","lastTransitionTime":"2026-01-05T21:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.597524 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.597562 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.597574 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.597589 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.597600 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:06Z","lastTransitionTime":"2026-01-05T21:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.701022 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.701068 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.701080 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.701095 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.701106 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:06Z","lastTransitionTime":"2026-01-05T21:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.804441 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.804805 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.804873 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.804972 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.805077 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:06Z","lastTransitionTime":"2026-01-05T21:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.907258 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.907556 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.907685 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.907820 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:06 crc kubenswrapper[5000]: I0105 21:35:06.907970 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:06Z","lastTransitionTime":"2026-01-05T21:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.012017 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.012060 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.012072 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.012091 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.012111 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.077512 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.077961 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.078140 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.078287 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.078422 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: E0105 21:35:07.098847 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.103665 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.103940 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.104131 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.104283 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.104413 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: E0105 21:35:07.122966 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.127449 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.127550 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.127563 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.127591 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.127604 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: E0105 21:35:07.139089 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.144226 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.144267 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.144276 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.144292 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.144307 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: E0105 21:35:07.172087 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:07Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:07 crc kubenswrapper[5000]: E0105 21:35:07.172193 5000 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.173807 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.173839 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.173849 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.173865 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.173876 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.276863 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.277006 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.277030 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.277064 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.277088 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.323477 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.323661 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.323662 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:07 crc kubenswrapper[5000]: E0105 21:35:07.324151 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.323869 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:07 crc kubenswrapper[5000]: E0105 21:35:07.324299 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:07 crc kubenswrapper[5000]: E0105 21:35:07.323851 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:07 crc kubenswrapper[5000]: E0105 21:35:07.324476 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.380818 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.380869 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.380878 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.380914 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.380931 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.484257 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.484354 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.484384 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.484426 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.484454 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.587876 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.587980 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.588003 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.588033 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.588054 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.691060 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.691135 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.691158 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.691187 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.691207 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.793664 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.793720 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.793732 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.793751 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.793763 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.896003 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.896045 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.896056 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.896074 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.896087 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.999055 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.999093 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.999101 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.999117 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:07 crc kubenswrapper[5000]: I0105 21:35:07.999127 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:07Z","lastTransitionTime":"2026-01-05T21:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.101574 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.101646 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.101660 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.101683 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.101695 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:08Z","lastTransitionTime":"2026-01-05T21:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.204108 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.204143 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.204152 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.204164 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.204173 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:08Z","lastTransitionTime":"2026-01-05T21:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.306621 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.306650 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.306660 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.306674 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.306684 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:08Z","lastTransitionTime":"2026-01-05T21:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.323630 5000 scope.go:117] "RemoveContainer" containerID="801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2" Jan 05 21:35:08 crc kubenswrapper[5000]: E0105 21:35:08.323906 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.409843 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.409882 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.410003 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.410023 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.410032 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:08Z","lastTransitionTime":"2026-01-05T21:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.512825 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.512870 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.512883 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.512931 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.512943 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:08Z","lastTransitionTime":"2026-01-05T21:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.615063 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.615106 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.615136 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.615156 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.615170 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:08Z","lastTransitionTime":"2026-01-05T21:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.717311 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.717336 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.717344 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.717364 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.717381 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:08Z","lastTransitionTime":"2026-01-05T21:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.819604 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.819630 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.819638 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.819650 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.819660 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:08Z","lastTransitionTime":"2026-01-05T21:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.921583 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.921648 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.921667 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.921689 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:08 crc kubenswrapper[5000]: I0105 21:35:08.921706 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:08Z","lastTransitionTime":"2026-01-05T21:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.023824 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.023877 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.023908 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.023928 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.023943 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:09Z","lastTransitionTime":"2026-01-05T21:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.126536 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.126587 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.126601 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.126618 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.126628 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:09Z","lastTransitionTime":"2026-01-05T21:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.229175 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.229225 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.229236 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.229254 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.229267 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:09Z","lastTransitionTime":"2026-01-05T21:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.323323 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.323396 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.323434 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:09 crc kubenswrapper[5000]: E0105 21:35:09.323475 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.323535 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:09 crc kubenswrapper[5000]: E0105 21:35:09.323655 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:09 crc kubenswrapper[5000]: E0105 21:35:09.323703 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:09 crc kubenswrapper[5000]: E0105 21:35:09.323782 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.330835 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.330916 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.330925 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.330937 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.330948 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:09Z","lastTransitionTime":"2026-01-05T21:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.433103 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.433168 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.433186 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.433208 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.433223 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:09Z","lastTransitionTime":"2026-01-05T21:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.535844 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.536012 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.536023 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.536035 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.536043 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:09Z","lastTransitionTime":"2026-01-05T21:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.637947 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.637980 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.637991 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.638006 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.638019 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:09Z","lastTransitionTime":"2026-01-05T21:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.741269 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.741318 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.741330 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.741357 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.741371 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:09Z","lastTransitionTime":"2026-01-05T21:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.845589 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.845647 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.845658 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.845675 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.845688 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:09Z","lastTransitionTime":"2026-01-05T21:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.947968 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.948012 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.948019 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.948034 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:09 crc kubenswrapper[5000]: I0105 21:35:09.948043 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:09Z","lastTransitionTime":"2026-01-05T21:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.049623 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.049677 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.049692 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.049709 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.049726 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:10Z","lastTransitionTime":"2026-01-05T21:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.152269 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.152315 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.152326 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.152346 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.152356 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:10Z","lastTransitionTime":"2026-01-05T21:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.255149 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.255181 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.255189 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.255203 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.255214 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:10Z","lastTransitionTime":"2026-01-05T21:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.357215 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.357254 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.357263 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.357275 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.357285 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:10Z","lastTransitionTime":"2026-01-05T21:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.459945 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.459996 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.460009 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.460030 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.460058 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:10Z","lastTransitionTime":"2026-01-05T21:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.562393 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.562424 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.562432 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.562444 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.562454 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:10Z","lastTransitionTime":"2026-01-05T21:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.666515 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.666561 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.666574 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.666591 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.666602 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:10Z","lastTransitionTime":"2026-01-05T21:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.769230 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.769290 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.769304 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.769318 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.769328 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:10Z","lastTransitionTime":"2026-01-05T21:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.871147 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.871189 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.871200 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.871217 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.871236 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:10Z","lastTransitionTime":"2026-01-05T21:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.972924 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.972957 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.972968 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.972982 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:10 crc kubenswrapper[5000]: I0105 21:35:10.972992 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:10Z","lastTransitionTime":"2026-01-05T21:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.075641 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.075686 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.075699 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.075712 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.075722 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:11Z","lastTransitionTime":"2026-01-05T21:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.178202 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.178240 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.178251 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.178267 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.178277 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:11Z","lastTransitionTime":"2026-01-05T21:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.280950 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.281016 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.281029 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.281043 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.281052 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:11Z","lastTransitionTime":"2026-01-05T21:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.323625 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.323662 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.323627 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:11 crc kubenswrapper[5000]: E0105 21:35:11.323804 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.323662 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:11 crc kubenswrapper[5000]: E0105 21:35:11.324020 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:11 crc kubenswrapper[5000]: E0105 21:35:11.324067 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:11 crc kubenswrapper[5000]: E0105 21:35:11.324152 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.383804 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.383848 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.383857 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.383871 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.383880 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:11Z","lastTransitionTime":"2026-01-05T21:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.486529 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.486573 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.486584 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.486598 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.486607 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:11Z","lastTransitionTime":"2026-01-05T21:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.589129 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.589168 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.589179 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.589197 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.589209 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:11Z","lastTransitionTime":"2026-01-05T21:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.691650 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.691692 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.691705 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.691722 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.691736 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:11Z","lastTransitionTime":"2026-01-05T21:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.794028 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.794067 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.794077 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.794091 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.794100 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:11Z","lastTransitionTime":"2026-01-05T21:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.896480 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.896546 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.896566 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.896590 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.896608 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:11Z","lastTransitionTime":"2026-01-05T21:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.999417 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.999461 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.999471 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.999506 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:11 crc kubenswrapper[5000]: I0105 21:35:11.999519 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:11Z","lastTransitionTime":"2026-01-05T21:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.101773 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.101817 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.101828 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.101844 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.101858 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:12Z","lastTransitionTime":"2026-01-05T21:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.203860 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.203912 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.203920 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.203933 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.203942 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:12Z","lastTransitionTime":"2026-01-05T21:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.305536 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.305581 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.305591 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.305606 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.305615 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:12Z","lastTransitionTime":"2026-01-05T21:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.407520 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.407578 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.407592 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.407612 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.407626 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:12Z","lastTransitionTime":"2026-01-05T21:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.510005 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.510045 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.510054 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.510069 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.510078 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:12Z","lastTransitionTime":"2026-01-05T21:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.612626 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.612667 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.612675 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.612689 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.612698 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:12Z","lastTransitionTime":"2026-01-05T21:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.714765 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.714816 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.714828 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.714843 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.714855 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:12Z","lastTransitionTime":"2026-01-05T21:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.817286 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.817322 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.817357 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.817374 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.817383 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:12Z","lastTransitionTime":"2026-01-05T21:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.919538 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.919584 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.919595 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.919611 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:12 crc kubenswrapper[5000]: I0105 21:35:12.919624 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:12Z","lastTransitionTime":"2026-01-05T21:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.022269 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.022310 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.022319 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.022333 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.022344 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:13Z","lastTransitionTime":"2026-01-05T21:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.124159 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.124193 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.124202 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.124215 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.124223 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:13Z","lastTransitionTime":"2026-01-05T21:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.226653 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.226690 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.226708 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.226724 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.226736 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:13Z","lastTransitionTime":"2026-01-05T21:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.323167 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.323248 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.323304 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:13 crc kubenswrapper[5000]: E0105 21:35:13.323311 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:13 crc kubenswrapper[5000]: E0105 21:35:13.323396 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.323404 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:13 crc kubenswrapper[5000]: E0105 21:35:13.323472 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:13 crc kubenswrapper[5000]: E0105 21:35:13.323518 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.329052 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.329080 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.329089 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.329099 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.329109 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:13Z","lastTransitionTime":"2026-01-05T21:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.431522 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.431547 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.431554 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.431567 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.431575 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:13Z","lastTransitionTime":"2026-01-05T21:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.533424 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.533460 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.533472 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.533487 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.533497 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:13Z","lastTransitionTime":"2026-01-05T21:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.635772 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.635823 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.635837 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.635855 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.635867 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:13Z","lastTransitionTime":"2026-01-05T21:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.737722 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.737774 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.737786 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.737803 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.737815 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:13Z","lastTransitionTime":"2026-01-05T21:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.840978 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.841036 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.841052 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.841075 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.841092 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:13Z","lastTransitionTime":"2026-01-05T21:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.943315 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.943374 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.943391 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.943413 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:13 crc kubenswrapper[5000]: I0105 21:35:13.943431 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:13Z","lastTransitionTime":"2026-01-05T21:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.045935 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.045982 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.045994 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.046010 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.046050 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:14Z","lastTransitionTime":"2026-01-05T21:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.147813 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.147844 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.147873 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.147908 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.147917 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:14Z","lastTransitionTime":"2026-01-05T21:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.249554 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.249589 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.249599 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.249612 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.249622 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:14Z","lastTransitionTime":"2026-01-05T21:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.352155 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.352196 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.352207 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.352219 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.352228 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:14Z","lastTransitionTime":"2026-01-05T21:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.454345 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.454381 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.454390 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.454409 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.454431 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:14Z","lastTransitionTime":"2026-01-05T21:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.557464 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.557524 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.557541 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.557558 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.557568 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:14Z","lastTransitionTime":"2026-01-05T21:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.660360 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.660408 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.660420 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.660438 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.660474 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:14Z","lastTransitionTime":"2026-01-05T21:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.762418 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.762457 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.762471 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.762489 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.762499 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:14Z","lastTransitionTime":"2026-01-05T21:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.865576 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.865625 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.865636 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.865653 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.865666 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:14Z","lastTransitionTime":"2026-01-05T21:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.967724 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.967766 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.967780 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.967798 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:14 crc kubenswrapper[5000]: I0105 21:35:14.967811 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:14Z","lastTransitionTime":"2026-01-05T21:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.069918 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.069967 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.069978 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.069994 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.070009 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:15Z","lastTransitionTime":"2026-01-05T21:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.171912 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.171955 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.171964 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.171981 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.171992 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:15Z","lastTransitionTime":"2026-01-05T21:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.274465 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.274516 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.274526 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.274540 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.274549 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:15Z","lastTransitionTime":"2026-01-05T21:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.323306 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.323358 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.323363 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.323326 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:15 crc kubenswrapper[5000]: E0105 21:35:15.323437 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:15 crc kubenswrapper[5000]: E0105 21:35:15.323520 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:15 crc kubenswrapper[5000]: E0105 21:35:15.323592 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:15 crc kubenswrapper[5000]: E0105 21:35:15.323661 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.340185 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.352257 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.363581 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.375583 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.376690 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.376724 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.376738 5000 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.376752 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.376761 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:15Z","lastTransitionTime":"2026-01-05T21:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.385924 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 
05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.398320 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastS
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.410083 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.418652 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.432002 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.443790 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8968859c-a813-4486-a2b4-cf020c7b00f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://020f8df6592a02a08387a0fe9f50a9a54d9c0e661aab8f921e5a39bffb183928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2d7aa2b6fe302f377189d5a76bc6b5b2b78ad2c2f9d89952f720f02292aff7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcf28c82ea7e5c99e63e2a89c0703830ff0aecc1132e28157e8986e6a6b4bc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52
ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.455544 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.468539 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.479757 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.479795 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.479803 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.479816 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.479827 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:15Z","lastTransitionTime":"2026-01-05T21:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.480155 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.489917 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.501092 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.506766 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:15 crc kubenswrapper[5000]: E0105 21:35:15.506941 5000 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:35:15 crc kubenswrapper[5000]: E0105 21:35:15.506997 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs podName:b3a4c991-8f85-4923-afb4-8cc78ceeaed8 nodeName:}" failed. 
No retries permitted until 2026-01-05 21:35:47.506981921 +0000 UTC m=+102.463184390 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs") pod "network-metrics-daemon-gpwcw" (UID: "b3a4c991-8f85-4923-afb4-8cc78ceeaed8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.510214 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-res
olver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.527033 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:52Z\\\",\\\"message\\\":\\\"d to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z]\\\\nI0105 21:34:52.075012 6641 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 in node crc\\\\nI0105 21:34:52.075016 6641 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:34:52.075019 6641 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 after 0 failed attempt(s)\\\\nI0105 21:34:52.075025 6641 default_network_controller.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4e
b3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:15Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.582820 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.582858 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.582868 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.582883 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.582922 5000 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:15Z","lastTransitionTime":"2026-01-05T21:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.685027 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.685064 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.685119 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.685136 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.685148 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:15Z","lastTransitionTime":"2026-01-05T21:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.788003 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.788070 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.788087 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.788109 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.788126 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:15Z","lastTransitionTime":"2026-01-05T21:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.890250 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.890406 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.890429 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.890447 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.890460 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:15Z","lastTransitionTime":"2026-01-05T21:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.992841 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.992880 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.992904 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.992919 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:15 crc kubenswrapper[5000]: I0105 21:35:15.992929 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:15Z","lastTransitionTime":"2026-01-05T21:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.095365 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.095403 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.095412 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.095426 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.095436 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:16Z","lastTransitionTime":"2026-01-05T21:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.197735 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.197774 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.197787 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.197802 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.197813 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:16Z","lastTransitionTime":"2026-01-05T21:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.299507 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.299582 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.299593 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.299609 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.299619 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:16Z","lastTransitionTime":"2026-01-05T21:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.402174 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.402210 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.402218 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.402230 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.402239 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:16Z","lastTransitionTime":"2026-01-05T21:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.504605 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.504661 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.504671 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.504684 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.504694 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:16Z","lastTransitionTime":"2026-01-05T21:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.608073 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.608158 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.608185 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.608214 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.608244 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:16Z","lastTransitionTime":"2026-01-05T21:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.685291 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sd8pl_c10b7118-eb24-495a-bb8f-bc46a3c38799/kube-multus/0.log" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.685339 5000 generic.go:334] "Generic (PLEG): container finished" podID="c10b7118-eb24-495a-bb8f-bc46a3c38799" containerID="0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7" exitCode=1 Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.685367 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sd8pl" event={"ID":"c10b7118-eb24-495a-bb8f-bc46a3c38799","Type":"ContainerDied","Data":"0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7"} Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.685692 5000 scope.go:117] "RemoveContainer" containerID="0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.700096 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:35:16Z\\\",\\\"message\\\":\\\"2026-01-05T21:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bd606df-75ca-485c-bf7b-84f2a2b224fd\\\\n2026-01-05T21:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bd606df-75ca-485c-bf7b-84f2a2b224fd to /host/opt/cni/bin/\\\\n2026-01-05T21:34:31Z [verbose] multus-daemon started\\\\n2026-01-05T21:34:31Z [verbose] Readiness Indicator file check\\\\n2026-01-05T21:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.710119 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c929f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.710679 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.710749 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.710764 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.710783 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.710793 5000 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:16Z","lastTransitionTime":"2026-01-05T21:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.720224 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8968859c-a813-4486-a2b4-cf020c7b00f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://020f8df6592a02a08387a0fe9f50a9a54d9c0e661aab8f921e5a39bffb183928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2d7aa2b6fe302f377189d5a76bc6b5b2b78ad2c2f9d89952f720f02292aff7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcf28c82ea7e5c99e63e2a89c0703830ff0aecc1132e28157e8986e6a6b4bc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.732676 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.744851 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.755163 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.765638 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.782227 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:52Z\\\",\\\"message\\\":\\\"d to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z]\\\\nI0105 21:34:52.075012 6641 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 in node crc\\\\nI0105 21:34:52.075016 6641 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:34:52.075019 6641 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 after 0 failed attempt(s)\\\\nI0105 21:34:52.075025 6641 default_network_controller.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4e
b3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.791439 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.803174 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.813384 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.813418 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.813428 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.813442 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.813453 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:16Z","lastTransitionTime":"2026-01-05T21:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.815211 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.825843 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.837249 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.852949 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.867686 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.881436 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.892804 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:16Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.915468 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.915507 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.915518 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:16 crc 
kubenswrapper[5000]: I0105 21:35:16.915533 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:16 crc kubenswrapper[5000]: I0105 21:35:16.915545 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:16Z","lastTransitionTime":"2026-01-05T21:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.017477 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.017513 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.017524 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.017539 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.017550 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.119456 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.119497 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.119505 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.119520 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.119529 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.220161 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.220199 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.220209 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.220225 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.220236 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: E0105 21:35:17.232693 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.236235 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.236271 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.236280 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.236295 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.236305 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: E0105 21:35:17.246714 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.249856 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.249902 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.249916 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.249937 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.249947 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: E0105 21:35:17.260530 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.264069 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.264106 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.264118 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.264134 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.264147 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: E0105 21:35:17.274348 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.277387 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.277419 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.277429 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.277444 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.277455 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: E0105 21:35:17.289665 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: E0105 21:35:17.289853 5000 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.291175 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.291210 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.291220 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.291235 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.291246 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.323048 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.323083 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.323048 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:17 crc kubenswrapper[5000]: E0105 21:35:17.323170 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:17 crc kubenswrapper[5000]: E0105 21:35:17.323243 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:17 crc kubenswrapper[5000]: E0105 21:35:17.323308 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.323341 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:17 crc kubenswrapper[5000]: E0105 21:35:17.323409 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.393387 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.393426 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.393435 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.393448 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.393457 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.495909 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.495993 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.496006 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.496023 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.496036 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.598177 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.598207 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.598216 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.598230 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.598252 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.690245 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sd8pl_c10b7118-eb24-495a-bb8f-bc46a3c38799/kube-multus/0.log" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.690310 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sd8pl" event={"ID":"c10b7118-eb24-495a-bb8f-bc46a3c38799","Type":"ContainerStarted","Data":"d9046be61fa273923c77fe35be04fbf84a891ee4c803f73f42de122fa83f8ba0"} Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.700745 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.700799 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.700829 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.700850 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.700865 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.705936 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.716703 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.730455 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.744368 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.754422 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8968859c-a813-4486-a2b4-cf020c7b00f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://020f8df6592a02a08387a0fe9f50a9a54d9c0e661aab8f921e5a39bffb183928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2d7aa2b6fe302f377189d5a76bc6b5b2b78ad2c2f9d89952f720f02292aff7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcf28c82ea7e5c99e63e2a89c0703830ff0aecc1132e28157e8986e6a6b4bc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52
ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.765283 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.775862 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.786686 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9046be61fa273923c77fe35be04fbf84a891ee4c803f73f42de122fa83f8ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:35:16Z\\\",\\\"message\\\":\\\"2026-01-05T21:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bd606df-75ca-485c-bf7b-84f2a2b224fd\\\\n2026-01-05T21:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bd606df-75ca-485c-bf7b-84f2a2b224fd to /host/opt/cni/bin/\\\\n2026-01-05T21:34:31Z [verbose] multus-daemon started\\\\n2026-01-05T21:34:31Z [verbose] 
Readiness Indicator file check\\\\n2026-01-05T21:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.795623 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c92
9f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.803990 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.804032 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.804045 5000 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.804064 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.804075 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.808106 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" 
Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.817054 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.832991 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:52Z\\\",\\\"message\\\":\\\"d to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z]\\\\nI0105 21:34:52.075012 6641 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 in node crc\\\\nI0105 21:34:52.075016 6641 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:34:52.075019 6641 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 after 0 failed attempt(s)\\\\nI0105 21:34:52.075025 6641 default_network_controller.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4e
b3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.843397 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.855924 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.869248 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.879120 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.888738 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:17Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:17 crc 
kubenswrapper[5000]: I0105 21:35:17.906476 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.906518 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.906551 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.906568 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:17 crc kubenswrapper[5000]: I0105 21:35:17.906581 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:17Z","lastTransitionTime":"2026-01-05T21:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.008419 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.008467 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.008478 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.008495 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.008508 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:18Z","lastTransitionTime":"2026-01-05T21:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.110351 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.110396 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.110409 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.110424 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.110435 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:18Z","lastTransitionTime":"2026-01-05T21:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.212016 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.212059 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.212072 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.212084 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.212092 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:18Z","lastTransitionTime":"2026-01-05T21:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.314215 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.314273 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.314291 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.314320 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.314342 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:18Z","lastTransitionTime":"2026-01-05T21:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.417948 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.417979 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.417990 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.418006 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.418017 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:18Z","lastTransitionTime":"2026-01-05T21:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.521283 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.521343 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.521355 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.521371 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.521382 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:18Z","lastTransitionTime":"2026-01-05T21:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.623840 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.623879 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.623912 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.623928 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.623938 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:18Z","lastTransitionTime":"2026-01-05T21:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.725868 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.725929 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.725938 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.725952 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.725961 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:18Z","lastTransitionTime":"2026-01-05T21:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.828047 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.828086 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.828098 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.828114 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.828128 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:18Z","lastTransitionTime":"2026-01-05T21:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.930443 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.930478 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.930488 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.930502 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:18 crc kubenswrapper[5000]: I0105 21:35:18.930513 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:18Z","lastTransitionTime":"2026-01-05T21:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.033320 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.033577 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.033654 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.033752 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.033838 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:19Z","lastTransitionTime":"2026-01-05T21:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.136703 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.136750 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.136762 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.136777 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.136788 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:19Z","lastTransitionTime":"2026-01-05T21:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.239733 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.239772 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.239785 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.239802 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.239815 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:19Z","lastTransitionTime":"2026-01-05T21:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.322789 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.322856 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.322921 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.323023 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:19 crc kubenswrapper[5000]: E0105 21:35:19.323128 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:19 crc kubenswrapper[5000]: E0105 21:35:19.323318 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:19 crc kubenswrapper[5000]: E0105 21:35:19.323390 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:19 crc kubenswrapper[5000]: E0105 21:35:19.323469 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.341664 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.341715 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.341730 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.341749 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.341762 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:19Z","lastTransitionTime":"2026-01-05T21:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.444692 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.444748 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.444764 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.444787 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.444798 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:19Z","lastTransitionTime":"2026-01-05T21:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.547063 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.547130 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.547164 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.547184 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.547196 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:19Z","lastTransitionTime":"2026-01-05T21:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.649367 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.649417 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.649431 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.649450 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.649464 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:19Z","lastTransitionTime":"2026-01-05T21:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.750940 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.750980 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.750990 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.751003 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.751012 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:19Z","lastTransitionTime":"2026-01-05T21:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.852823 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.852870 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.852879 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.852907 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.852918 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:19Z","lastTransitionTime":"2026-01-05T21:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.955140 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.955183 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.955193 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.955208 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:19 crc kubenswrapper[5000]: I0105 21:35:19.955219 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:19Z","lastTransitionTime":"2026-01-05T21:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.057865 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.057955 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.057968 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.057986 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.057999 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:20Z","lastTransitionTime":"2026-01-05T21:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.161077 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.161117 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.161126 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.161140 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.161148 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:20Z","lastTransitionTime":"2026-01-05T21:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.263702 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.263751 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.263759 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.263774 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.263788 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:20Z","lastTransitionTime":"2026-01-05T21:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.366431 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.366477 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.366490 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.366507 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.366519 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:20Z","lastTransitionTime":"2026-01-05T21:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.468521 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.468581 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.468593 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.468610 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.468622 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:20Z","lastTransitionTime":"2026-01-05T21:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.570257 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.570287 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.570296 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.570308 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.570316 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:20Z","lastTransitionTime":"2026-01-05T21:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.673564 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.673606 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.673618 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.673636 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.673650 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:20Z","lastTransitionTime":"2026-01-05T21:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.777138 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.777189 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.777201 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.777220 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.777234 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:20Z","lastTransitionTime":"2026-01-05T21:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.880046 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.880092 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.880105 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.880129 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.880146 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:20Z","lastTransitionTime":"2026-01-05T21:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.982819 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.983148 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.983272 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.983443 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:20 crc kubenswrapper[5000]: I0105 21:35:20.983579 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:20Z","lastTransitionTime":"2026-01-05T21:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.086057 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.086106 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.086123 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.086203 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.086220 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:21Z","lastTransitionTime":"2026-01-05T21:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.189220 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.189253 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.189266 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.189286 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.189303 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:21Z","lastTransitionTime":"2026-01-05T21:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.292921 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.293232 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.293390 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.293538 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.293661 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:21Z","lastTransitionTime":"2026-01-05T21:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.323580 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.323711 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:21 crc kubenswrapper[5000]: E0105 21:35:21.323792 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:21 crc kubenswrapper[5000]: E0105 21:35:21.323935 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.324011 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.323623 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:21 crc kubenswrapper[5000]: E0105 21:35:21.324226 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:21 crc kubenswrapper[5000]: E0105 21:35:21.324579 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.396421 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.396472 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.396484 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.396503 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.396517 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:21Z","lastTransitionTime":"2026-01-05T21:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.500018 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.500115 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.500129 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.500184 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.500204 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:21Z","lastTransitionTime":"2026-01-05T21:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.603618 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.603672 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.603691 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.603713 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.603730 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:21Z","lastTransitionTime":"2026-01-05T21:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.706062 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.706120 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.706143 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.706171 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.706194 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:21Z","lastTransitionTime":"2026-01-05T21:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.809987 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.810042 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.810061 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.810094 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.810111 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:21Z","lastTransitionTime":"2026-01-05T21:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.913307 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.913355 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.913372 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.913395 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:21 crc kubenswrapper[5000]: I0105 21:35:21.913416 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:21Z","lastTransitionTime":"2026-01-05T21:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.016267 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.016305 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.016319 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.016339 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.016353 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:22Z","lastTransitionTime":"2026-01-05T21:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.119120 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.119185 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.119209 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.119270 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.119294 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:22Z","lastTransitionTime":"2026-01-05T21:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.221574 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.221635 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.221653 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.221677 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.221693 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:22Z","lastTransitionTime":"2026-01-05T21:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.324217 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.324248 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.324257 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.324296 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.324304 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:22Z","lastTransitionTime":"2026-01-05T21:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.426981 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.427012 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.427020 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.427032 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.427060 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:22Z","lastTransitionTime":"2026-01-05T21:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.530199 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.530236 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.530260 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.530276 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.530285 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:22Z","lastTransitionTime":"2026-01-05T21:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.633240 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.633295 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.633312 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.633334 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.633352 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:22Z","lastTransitionTime":"2026-01-05T21:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.736179 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.736563 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.736711 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.736841 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.737044 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:22Z","lastTransitionTime":"2026-01-05T21:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.840286 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.840354 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.840377 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.840405 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.840427 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:22Z","lastTransitionTime":"2026-01-05T21:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.943757 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.943837 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.943861 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.943921 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:22 crc kubenswrapper[5000]: I0105 21:35:22.943950 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:22Z","lastTransitionTime":"2026-01-05T21:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.047234 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.047303 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.047321 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.047344 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.047361 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:23Z","lastTransitionTime":"2026-01-05T21:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.150830 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.150937 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.150964 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.150995 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.151018 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:23Z","lastTransitionTime":"2026-01-05T21:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.254455 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.254840 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.255074 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.255307 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.255515 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:23Z","lastTransitionTime":"2026-01-05T21:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.323316 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.323373 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:23 crc kubenswrapper[5000]: E0105 21:35:23.323993 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.323426 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:23 crc kubenswrapper[5000]: E0105 21:35:23.324080 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.323408 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:23 crc kubenswrapper[5000]: E0105 21:35:23.324152 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.324323 5000 scope.go:117] "RemoveContainer" containerID="801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2" Jan 05 21:35:23 crc kubenswrapper[5000]: E0105 21:35:23.323880 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.357294 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.357329 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.357341 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.357355 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.357366 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:23Z","lastTransitionTime":"2026-01-05T21:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.459824 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.459863 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.459877 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.459910 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.459923 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:23Z","lastTransitionTime":"2026-01-05T21:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.561833 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.561866 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.561875 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.561902 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.561913 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:23Z","lastTransitionTime":"2026-01-05T21:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.664806 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.664879 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.665000 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.665025 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.665072 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:23Z","lastTransitionTime":"2026-01-05T21:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.713659 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/2.log" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.717556 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerStarted","Data":"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7"} Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.717991 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.742600 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.755516 5000 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e3606c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.768786 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.768967 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.769023 5000 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.769063 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.769077 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:23Z","lastTransitionTime":"2026-01-05T21:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.776478 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d3
2e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.791118 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf0
1bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.805850 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.816202 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.829227 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9046be61fa273923c77fe35be04fbf84a891ee4c803f73f42de122fa83f8ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:35:16Z\\\",\\\"message\\\":\\\"2026-01-05T21:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bd606df-75ca-485c-bf7b-84f2a2b224fd\\\\n2026-01-05T21:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bd606df-75ca-485c-bf7b-84f2a2b224fd to /host/opt/cni/bin/\\\\n2026-01-05T21:34:31Z [verbose] multus-daemon started\\\\n2026-01-05T21:34:31Z [verbose] 
Readiness Indicator file check\\\\n2026-01-05T21:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.838598 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c92
9f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.848552 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8968859c-a813-4486-a2b4-cf020c7b00f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://020f8df6592a02a08387a0fe9f50a9a54d9c0e661aab8f921e5a39bffb183928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2d7aa2b6fe302f377189d5a76bc6b5b2b78ad2c2f9d89952f720f02292aff7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcf28c82ea7e5c99e63e2a89c0703830ff0aecc1132e28157e8986e6a6b4bc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.869198 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:52Z\\\",\\\"message\\\":\\\"d to start default network controller: unable to create admin network policy controller, 
err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z]\\\\nI0105 21:34:52.075012 6641 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 in node crc\\\\nI0105 21:34:52.075016 6641 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:34:52.075019 6641 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 after 0 failed attempt(s)\\\\nI0105 21:34:52.075025 6641 
default_network_controller.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkub
e-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
cfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.870967 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.871004 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.871016 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.871031 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.871042 5000 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:23Z","lastTransitionTime":"2026-01-05T21:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.887447 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.901371 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.914217 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.928153 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.942510 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc 
kubenswrapper[5000]: I0105 21:35:23.957331 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.966990 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:35:23Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.973397 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.973437 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.973456 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.973499 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:23 crc kubenswrapper[5000]: I0105 21:35:23.973514 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:23Z","lastTransitionTime":"2026-01-05T21:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.075683 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.075733 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.075745 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.075763 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.075774 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:24Z","lastTransitionTime":"2026-01-05T21:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.178288 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.178325 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.178336 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.178350 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.178362 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:24Z","lastTransitionTime":"2026-01-05T21:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.281626 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.281679 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.281697 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.281719 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.281736 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:24Z","lastTransitionTime":"2026-01-05T21:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.384567 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.384604 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.384615 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.384630 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.384639 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:24Z","lastTransitionTime":"2026-01-05T21:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.487818 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.487867 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.487878 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.487916 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.487927 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:24Z","lastTransitionTime":"2026-01-05T21:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.590563 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.590619 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.590632 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.590652 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.590664 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:24Z","lastTransitionTime":"2026-01-05T21:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.693390 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.693434 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.693447 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.693464 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.693477 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:24Z","lastTransitionTime":"2026-01-05T21:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.722041 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/3.log" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.722697 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/2.log" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.725384 5000 generic.go:334] "Generic (PLEG): container finished" podID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerID="a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7" exitCode=1 Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.725423 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7"} Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.725455 5000 scope.go:117] "RemoveContainer" containerID="801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.726017 5000 scope.go:117] "RemoveContainer" containerID="a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7" Jan 05 21:35:24 crc kubenswrapper[5000]: E0105 21:35:24.726167 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.742803 5000 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217
e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.756577 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.765991 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.779696 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.791502 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8968859c-a813-4486-a2b4-cf020c7b00f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://020f8df6592a02a08387a0fe9f50a9a54d9c0e661aab8f921e5a39bffb183928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2d7aa2b6fe302f377189d5a76bc6b5b2b78ad2c2f9d89952f720f02292aff7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcf28c82ea7e5c99e63e2a89c0703830ff0aecc1132e28157e8986e6a6b4bc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52
ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.795651 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.795717 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.795730 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.795748 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:24 crc 
kubenswrapper[5000]: I0105 21:35:24.795783 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:24Z","lastTransitionTime":"2026-01-05T21:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.803965 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.815695 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.827974 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9046be61fa273923c77fe35be04fbf84a891ee4c803f73f42de122fa83f8ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:35:16Z\\\",\\\"message\\\":\\\"2026-01-05T21:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bd606df-75ca-485c-bf7b-84f2a2b224fd\\\\n2026-01-05T21:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bd606df-75ca-485c-bf7b-84f2a2b224fd to /host/opt/cni/bin/\\\\n2026-01-05T21:34:31Z [verbose] multus-daemon started\\\\n2026-01-05T21:34:31Z [verbose] 
Readiness Indicator file check\\\\n2026-01-05T21:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.839873 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c92
9f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.855724 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.868222 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.891527 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:52Z\\\",\\\"message\\\":\\\"d to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z]\\\\nI0105 21:34:52.075012 6641 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 in node crc\\\\nI0105 21:34:52.075016 6641 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:34:52.075019 6641 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 after 0 failed attempt(s)\\\\nI0105 21:34:52.075025 6641 default_network_controller.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:35:24Z\\\",\\\"message\\\":\\\" 7064 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:35:24.318749 7064 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:35:24.318980 7064 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0105 21:35:24.318996 7064 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0105 21:35:24.319011 7064 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0105 
21:35:24.319015 7064 handler.go:208] Removed *v1.Node event handler 2\\\\nI0105 21:35:24.319026 7064 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:35:24.319031 7064 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:35:24.319041 7064 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0105 21:35:24.319047 7064 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0105 21:35:24.319062 7064 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0105 21:35:24.319102 7064 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:35:24.319103 7064 factory.go:656] Stopping watch factory\\\\nI0105 21:35:24.319118 7064 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0105 21:35:24.319133 7064 ovnkube.go:599] Stopped ovnkube\\\\nI0105 21\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.897846 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.897906 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.897914 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.897930 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.897940 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:24Z","lastTransitionTime":"2026-01-05T21:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.906811 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.924515 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.938494 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.950795 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:24 crc kubenswrapper[5000]: I0105 21:35:24.962165 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:24Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc 
kubenswrapper[5000]: I0105 21:35:25.000511 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.000565 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.000579 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.000604 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.000629 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:25Z","lastTransitionTime":"2026-01-05T21:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.103075 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.103114 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.103123 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.103138 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.103148 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:25Z","lastTransitionTime":"2026-01-05T21:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.205312 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.205347 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.205355 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.205370 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.205379 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:25Z","lastTransitionTime":"2026-01-05T21:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.308403 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.308460 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.308481 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.308509 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.308554 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:25Z","lastTransitionTime":"2026-01-05T21:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.323209 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:25 crc kubenswrapper[5000]: E0105 21:35:25.323313 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.323466 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.323518 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:25 crc kubenswrapper[5000]: E0105 21:35:25.323600 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.323642 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:25 crc kubenswrapper[5000]: E0105 21:35:25.323789 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:25 crc kubenswrapper[5000]: E0105 21:35:25.323935 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.338693 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.350120 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.361790 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.375361 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.385142 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc 
kubenswrapper[5000]: I0105 21:35:25.396320 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6
faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 
21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.406570 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.410246 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.410268 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.410276 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.410288 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.410296 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:25Z","lastTransitionTime":"2026-01-05T21:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.416213 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc
/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e3606c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.429329 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8
aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.441140 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8968859c-a813-4486-a2b4-cf020c7b00f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://020f8df6592a02a08387a0fe9f50a9a54d9c0e661aab8f921e5a39bffb183928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2d7aa2b6fe302f377189d5a76bc6b5b2b78ad2c2f9d89952f720f02292aff7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcf28c82ea7e5c99e63e2a89c0703830ff0aecc1132e28157e8986e6a6b4bc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52
ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.451858 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.461920 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.472927 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9046be61fa273923c77fe35be04fbf84a891ee4c803f73f42de122fa83f8ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:35:16Z\\\",\\\"message\\\":\\\"2026-01-05T21:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bd606df-75ca-485c-bf7b-84f2a2b224fd\\\\n2026-01-05T21:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bd606df-75ca-485c-bf7b-84f2a2b224fd to /host/opt/cni/bin/\\\\n2026-01-05T21:34:31Z [verbose] multus-daemon started\\\\n2026-01-05T21:34:31Z [verbose] 
Readiness Indicator file check\\\\n2026-01-05T21:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.482083 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c92
9f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.491996 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.500528 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.512452 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.512483 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.512495 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.512510 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.512521 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:25Z","lastTransitionTime":"2026-01-05T21:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.517100 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801c4f9563d6e8af8f62c5ab8d3d58214b2985c244e1266a12040f6fdc07b2d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:34:52Z\\\",\\\"message\\\":\\\"d to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:34:52Z is after 2025-08-24T17:21:41Z]\\\\nI0105 21:34:52.075012 6641 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 in node crc\\\\nI0105 21:34:52.075016 6641 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0105 21:34:52.075019 6641 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7 after 0 failed attempt(s)\\\\nI0105 21:34:52.075025 6641 default_network_controller.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:35:24Z\\\",\\\"message\\\":\\\" 7064 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:35:24.318749 7064 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:35:24.318980 7064 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0105 21:35:24.318996 7064 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0105 21:35:24.319011 7064 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0105 
21:35:24.319015 7064 handler.go:208] Removed *v1.Node event handler 2\\\\nI0105 21:35:24.319026 7064 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:35:24.319031 7064 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:35:24.319041 7064 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0105 21:35:24.319047 7064 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0105 21:35:24.319062 7064 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0105 21:35:24.319102 7064 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:35:24.319103 7064 factory.go:656] Stopping watch factory\\\\nI0105 21:35:24.319118 7064 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0105 21:35:24.319133 7064 ovnkube.go:599] Stopped ovnkube\\\\nI0105 21\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.614716 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.614746 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.614771 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.614809 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.614820 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:25Z","lastTransitionTime":"2026-01-05T21:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.716727 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.716779 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.716788 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.716802 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.716813 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:25Z","lastTransitionTime":"2026-01-05T21:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.729882 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/3.log" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.736415 5000 scope.go:117] "RemoveContainer" containerID="a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7" Jan 05 21:35:25 crc kubenswrapper[5000]: E0105 21:35:25.737265 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.754307 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aea814b4dc206142dc2421893d7f626d9460d8f55465f79280c74f55f80b1816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62b6a9700e5f29dcab7662d1500bc11df5bcf6e07b3ebab4b136daa376f77c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.763412 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7r7z6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a481902-8b99-488e-b5b9-5fbc3800a0c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://405ba256910bb2b496a179a36bf03fb0503b16ff784ac814f84c52da9285b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngq2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7r7z6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.788849 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1406b03-70e6-4874-8cfe-5991e43cc720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:35:24Z\\\",\\\"message\\\":\\\" 7064 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:35:24.318749 7064 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0105 21:35:24.318980 7064 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0105 
21:35:24.318996 7064 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0105 21:35:24.319011 7064 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0105 21:35:24.319015 7064 handler.go:208] Removed *v1.Node event handler 2\\\\nI0105 21:35:24.319026 7064 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0105 21:35:24.319031 7064 handler.go:208] Removed *v1.Node event handler 7\\\\nI0105 21:35:24.319041 7064 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0105 21:35:24.319047 7064 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0105 21:35:24.319062 7064 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0105 21:35:24.319102 7064 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0105 21:35:24.319103 7064 factory.go:656] Stopping watch factory\\\\nI0105 21:35:24.319118 7064 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0105 21:35:24.319133 7064 ovnkube.go:599] Stopped ovnkube\\\\nI0105 21\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:35:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58f44bc5b0329e5b4e
b3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2h8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f5k4c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.803399 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.816433 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5aca1d9c6705572523aa1b62d4c7419305b3ad01d548460b35dad8c94d0a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.819880 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.819963 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.819980 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.820004 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.820016 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:25Z","lastTransitionTime":"2026-01-05T21:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.833774 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.846495 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5478ab4e-c4bc-4871-92f9-d29d6d9486c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://288213707ce56c2aebf06392be656dbd9f0cf6a158ffaa88fead927b601dae86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320dbec31778c9229fbb04d05bf98b3f6608b
e3b07ae98f6c983ade0bd3f2149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmw4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ckdm7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.858093 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8pk5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gpwcw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc 
kubenswrapper[5000]: I0105 21:35:25.876579 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde19f36-8816-4b31-a711-82b9d90f0855\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38bf58b77e48d6
faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0105 21:34:07.694431 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0105 21:34:07.697181 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1017768786/tls.crt::/tmp/serving-cert-1017768786/tls.key\\\\\\\"\\\\nI0105 21:34:22.851882 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0105 21:34:22.858528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0105 21:34:22.858566 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0105 21:34:22.858616 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0105 21:34:22.858626 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0105 21:34:22.869701 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0105 21:34:22.869728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869732 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0105 21:34:22.869736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0105 21:34:22.869738 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0105 21:34:22.869742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0105 
21:34:22.869744 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0105 21:34:22.870020 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0105 21:34:22.872912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.891979 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92079495-eb3a-4e19-8186-77cc6f930d99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2073917d03ea0e05c56588c3bc6502181fce7e04c184f6cb31ec2555dd65382f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55bc29ae86478b0f91dca82e826b6326cafbe48a0414ad105d88c061862344e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427e5492d384da519f8242de4a1a0625ea6709fc19745e838dd98c4794178738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.905916 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e7d3ef9-ed44-43ac-826a-1b5606c8487b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c79690fbc0802b27c14d2561e08fef4f2273c61e179ce3af1cf20f800082bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e36
06c29310e148be970c090222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9bdlf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xpvqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.924333 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.924376 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.924393 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:25 crc 
kubenswrapper[5000]: I0105 21:35:25.924417 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.924434 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:25Z","lastTransitionTime":"2026-01-05T21:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.925617 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3199cfb3-5965-4ece-879d-2f49bd4c0976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74f2b0d325af46ff6d32e4cb5ab57014827f4d48f76d6e3857d63488c64d7ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c6010f8526431f8a8504f00bcede907c42a68cd22aced177abffd58d417fee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ca8dda0b00bfaa39e860cba0b6d79f21b73f142a858c7092229692293f1b01f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe66d4be9ab0bd8b988c86688cc0ecd099fc61bc293003b5a923f1138b083f1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a9b8aa3f4159282de0408bdc2d334ee0a1966f510223fa2f8bb6e0ee6890c72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://216dd8dea43b0a2e009672fe89b9662b953f802f3243a9b0c06e494778b52496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeb82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee
b82871c320564d2b86487fee5222dfe72f9f2ed7309586e2547005147badbf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62cm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ht6xh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.940869 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8968859c-a813-4486-a2b4-cf020c7b00f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://020f8df6592a02a08387a0fe9f50a9a54d9c0e661aab8f921e5a39bffb183928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb2d7aa2b6fe302f377189d5a76bc6b5b2b78ad2c2f9d89952f720f02292aff7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcf28c82ea7e5c99e63e2a89c0703830ff0aecc1132e28157e8986e6a6b4bc20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://827ea19c53d2a2042ad552d52ec8483396d07f63275ca162e37af28536ebd7b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-05T21:34:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.959407 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df679ede12d44f5c5888cbd447b8109ec1c5e27973d671896bbacd6c028e42fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.970868 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:25 crc kubenswrapper[5000]: I0105 21:35:25.989384 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sd8pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10b7118-eb24-495a-bb8f-bc46a3c38799\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9046be61fa273923c77fe35be04fbf84a891ee4c803f73f42de122fa83f8ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-05T21:35:16Z\\\",\\\"message\\\":\\\"2026-01-05T21:34:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1bd606df-75ca-485c-bf7b-84f2a2b224fd\\\\n2026-01-05T21:34:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1bd606df-75ca-485c-bf7b-84f2a2b224fd to /host/opt/cni/bin/\\\\n2026-01-05T21:34:31Z [verbose] multus-daemon started\\\\n2026-01-05T21:34:31Z [verbose] 
Readiness Indicator file check\\\\n2026-01-05T21:35:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-05T21:34:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdrqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sd8pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.000463 5000 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-px9xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70ba1bce-8373-472e-a7bf-776eba738f1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-05T21:34:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b86ab5e766ef5c92
9f16e682983ac7a55732c1b72d151059437c880245df3d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-05T21:34:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26ldj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-05T21:34:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-px9xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:25Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.026772 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.027012 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.027161 5000 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.027283 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.027372 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:26Z","lastTransitionTime":"2026-01-05T21:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.130173 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.130245 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.130264 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.130289 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.130305 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:26Z","lastTransitionTime":"2026-01-05T21:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.233936 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.234002 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.234026 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.234055 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.234079 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:26Z","lastTransitionTime":"2026-01-05T21:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.337352 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.337455 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.337474 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.337501 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.337535 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:26Z","lastTransitionTime":"2026-01-05T21:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.440702 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.440768 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.440792 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.440815 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.440840 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:26Z","lastTransitionTime":"2026-01-05T21:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.544971 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.545269 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.545469 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.545641 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.545842 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:26Z","lastTransitionTime":"2026-01-05T21:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.648536 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.648816 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.649004 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.649135 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.649230 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:26Z","lastTransitionTime":"2026-01-05T21:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.752015 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.752153 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.752171 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.752195 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.752212 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:26Z","lastTransitionTime":"2026-01-05T21:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.855707 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.855766 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.855782 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.855805 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.855820 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:26Z","lastTransitionTime":"2026-01-05T21:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.959039 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.959135 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.959163 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.959194 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:26 crc kubenswrapper[5000]: I0105 21:35:26.959217 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:26Z","lastTransitionTime":"2026-01-05T21:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.061646 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.061736 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.061759 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.061783 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.061801 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.164746 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.164787 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.164798 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.164814 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.164824 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.228776 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.228924 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-05 21:36:31.228862359 +0000 UTC m=+146.185064838 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.228999 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.229065 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.229149 5000 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.229191 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-05 21:36:31.229182218 +0000 UTC m=+146.185384687 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.229192 5000 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.229247 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-05 21:36:31.22923506 +0000 UTC m=+146.185437549 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.267411 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.267473 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.267488 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.267509 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.267521 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.322720 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.323184 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.323461 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.323556 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.322798 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.323675 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.323820 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.324036 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.329471 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.329673 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.329613 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.329947 5000 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.330055 5000 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.330182 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-05 21:36:31.330165165 +0000 UTC m=+146.286367644 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.329785 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.330372 5000 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.330454 5000 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.330670 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-05 21:36:31.330646639 +0000 UTC m=+146.286849118 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.334079 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.369240 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.369272 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.369280 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.369292 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.369301 5000 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.472523 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.472592 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.472617 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.472647 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.472668 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.575200 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.575246 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.575259 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.575278 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.575288 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.649585 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.649909 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.650027 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.650119 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.650305 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.668399 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.673321 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.673414 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.673442 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.673475 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.673500 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.692751 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.697964 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.698042 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.698091 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.698111 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.698124 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.716547 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.720527 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.720582 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.720600 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.720623 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.720640 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.739450 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.743127 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.743165 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.743173 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.743189 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.743199 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.756685 5000 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-05T21:35:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fe814346-f2cb-4c2c-b34c-7aac41ab93c7\\\",\\\"systemUUID\\\":\\\"57cd32f3-2b5a-4a0d-8652-c015d388936a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-05T21:35:27Z is after 2025-08-24T17:21:41Z" Jan 05 21:35:27 crc kubenswrapper[5000]: E0105 21:35:27.756811 5000 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.758340 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.758371 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.758381 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.758396 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.758405 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.861131 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.861164 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.861174 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.861187 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.861198 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.963103 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.963152 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.963167 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.963364 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:27 crc kubenswrapper[5000]: I0105 21:35:27.963379 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:27Z","lastTransitionTime":"2026-01-05T21:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.065827 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.065938 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.065956 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.065973 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.065984 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:28Z","lastTransitionTime":"2026-01-05T21:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.169706 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.169819 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.169950 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.169992 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.170050 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:28Z","lastTransitionTime":"2026-01-05T21:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.273746 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.273809 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.273830 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.273855 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.273872 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:28Z","lastTransitionTime":"2026-01-05T21:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.376025 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.376058 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.376066 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.376077 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.376089 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:28Z","lastTransitionTime":"2026-01-05T21:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.478666 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.478718 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.478734 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.478748 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.478757 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:28Z","lastTransitionTime":"2026-01-05T21:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.581361 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.581434 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.581458 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.581488 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.581510 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:28Z","lastTransitionTime":"2026-01-05T21:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.684324 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.684398 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.684424 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.684454 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.684476 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:28Z","lastTransitionTime":"2026-01-05T21:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.786437 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.786486 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.786496 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.786511 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.786521 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:28Z","lastTransitionTime":"2026-01-05T21:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.888913 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.888951 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.888963 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.888978 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.888987 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:28Z","lastTransitionTime":"2026-01-05T21:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.991975 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.992032 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.992048 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.992073 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:28 crc kubenswrapper[5000]: I0105 21:35:28.992092 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:28Z","lastTransitionTime":"2026-01-05T21:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.095336 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.095431 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.095456 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.095485 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.095512 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:29Z","lastTransitionTime":"2026-01-05T21:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.199078 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.199140 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.199157 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.199181 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.199203 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:29Z","lastTransitionTime":"2026-01-05T21:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.302332 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.302391 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.302407 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.302427 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.302443 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:29Z","lastTransitionTime":"2026-01-05T21:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.323189 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.323249 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.323260 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:29 crc kubenswrapper[5000]: E0105 21:35:29.323774 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.323877 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:29 crc kubenswrapper[5000]: E0105 21:35:29.323868 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:29 crc kubenswrapper[5000]: E0105 21:35:29.324060 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:29 crc kubenswrapper[5000]: E0105 21:35:29.326797 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.404936 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.404982 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.404993 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.405009 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.405020 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:29Z","lastTransitionTime":"2026-01-05T21:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.509667 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.509747 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.509775 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.509805 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.509828 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:29Z","lastTransitionTime":"2026-01-05T21:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.612835 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.612881 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.612912 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.612929 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.612941 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:29Z","lastTransitionTime":"2026-01-05T21:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.715866 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.716292 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.716452 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.716633 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.716765 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:29Z","lastTransitionTime":"2026-01-05T21:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.819246 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.819285 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.819293 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.819307 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.819318 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:29Z","lastTransitionTime":"2026-01-05T21:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.921943 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.922013 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.922034 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.922064 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:29 crc kubenswrapper[5000]: I0105 21:35:29.922086 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:29Z","lastTransitionTime":"2026-01-05T21:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.025519 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.025566 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.025578 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.025620 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.025635 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:30Z","lastTransitionTime":"2026-01-05T21:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.128581 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.128656 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.128684 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.128717 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.128739 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:30Z","lastTransitionTime":"2026-01-05T21:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.232192 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.232274 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.232301 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.232339 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.232365 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:30Z","lastTransitionTime":"2026-01-05T21:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.335867 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.335992 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.336021 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.336056 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.336083 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:30Z","lastTransitionTime":"2026-01-05T21:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.438774 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.438815 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.438825 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.438840 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.438848 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:30Z","lastTransitionTime":"2026-01-05T21:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.542047 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.542577 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.542746 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.542907 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.543017 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:30Z","lastTransitionTime":"2026-01-05T21:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.646607 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.646916 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.646999 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.647073 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.647140 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:30Z","lastTransitionTime":"2026-01-05T21:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.749353 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.749425 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.749446 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.749474 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.749492 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:30Z","lastTransitionTime":"2026-01-05T21:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.852770 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.852826 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.852835 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.852904 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.852915 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:30Z","lastTransitionTime":"2026-01-05T21:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.956417 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.956485 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.956508 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.956540 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:30 crc kubenswrapper[5000]: I0105 21:35:30.956562 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:30Z","lastTransitionTime":"2026-01-05T21:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.059489 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.059864 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.060030 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.060065 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.060087 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:31Z","lastTransitionTime":"2026-01-05T21:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.162842 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.162909 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.162922 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.162939 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.162952 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:31Z","lastTransitionTime":"2026-01-05T21:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.265707 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.265768 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.265782 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.265804 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.265815 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:31Z","lastTransitionTime":"2026-01-05T21:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.323367 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.323410 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.323445 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.323492 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:31 crc kubenswrapper[5000]: E0105 21:35:31.323546 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:31 crc kubenswrapper[5000]: E0105 21:35:31.323692 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:31 crc kubenswrapper[5000]: E0105 21:35:31.323783 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:31 crc kubenswrapper[5000]: E0105 21:35:31.324251 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.369524 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.369637 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.369656 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.369683 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.369707 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:31Z","lastTransitionTime":"2026-01-05T21:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.472501 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.472551 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.472562 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.472583 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.472596 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:31Z","lastTransitionTime":"2026-01-05T21:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.574636 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.574978 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.575113 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.575217 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.575304 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:31Z","lastTransitionTime":"2026-01-05T21:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.677826 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.680112 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.680147 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.680198 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.680221 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:31Z","lastTransitionTime":"2026-01-05T21:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.783015 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.783060 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.783071 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.783089 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.783100 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:31Z","lastTransitionTime":"2026-01-05T21:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.886514 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.886587 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.886611 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.886643 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.886668 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:31Z","lastTransitionTime":"2026-01-05T21:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.990180 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.990218 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.990226 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.990240 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:31 crc kubenswrapper[5000]: I0105 21:35:31.990249 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:31Z","lastTransitionTime":"2026-01-05T21:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.092682 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.092727 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.092736 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.092749 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.092760 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:32Z","lastTransitionTime":"2026-01-05T21:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.195159 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.195204 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.195214 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.195227 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.195239 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:32Z","lastTransitionTime":"2026-01-05T21:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.297425 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.297460 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.297471 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.297485 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.297496 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:32Z","lastTransitionTime":"2026-01-05T21:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.399813 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.399866 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.399880 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.399913 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.399925 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:32Z","lastTransitionTime":"2026-01-05T21:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.502645 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.503007 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.503022 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.503065 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.503078 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:32Z","lastTransitionTime":"2026-01-05T21:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.605802 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.605857 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.605867 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.605882 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.605903 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:32Z","lastTransitionTime":"2026-01-05T21:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.709188 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.709245 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.709263 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.709289 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.709308 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:32Z","lastTransitionTime":"2026-01-05T21:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.812761 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.812814 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.812830 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.812853 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.812871 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:32Z","lastTransitionTime":"2026-01-05T21:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.915610 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.915680 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.915703 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.915731 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:32 crc kubenswrapper[5000]: I0105 21:35:32.915753 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:32Z","lastTransitionTime":"2026-01-05T21:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.018279 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.018324 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.018338 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.018355 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.018365 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:33Z","lastTransitionTime":"2026-01-05T21:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.120741 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.121968 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.122006 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.122026 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.122038 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:33Z","lastTransitionTime":"2026-01-05T21:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.224850 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.225221 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.225367 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.225479 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.225579 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:33Z","lastTransitionTime":"2026-01-05T21:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.323802 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.323914 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.324229 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.324349 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:33 crc kubenswrapper[5000]: E0105 21:35:33.324503 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:33 crc kubenswrapper[5000]: E0105 21:35:33.324414 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:33 crc kubenswrapper[5000]: E0105 21:35:33.324648 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:33 crc kubenswrapper[5000]: E0105 21:35:33.324728 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.328555 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.328593 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.328649 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.328686 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.328700 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:33Z","lastTransitionTime":"2026-01-05T21:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.431085 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.431130 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.431138 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.431154 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.431166 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:33Z","lastTransitionTime":"2026-01-05T21:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.533591 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.533621 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.533630 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.533643 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.533652 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:33Z","lastTransitionTime":"2026-01-05T21:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.636487 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.636751 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.636849 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.636963 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.637063 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:33Z","lastTransitionTime":"2026-01-05T21:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.739756 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.740088 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.740301 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.740514 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.740595 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:33Z","lastTransitionTime":"2026-01-05T21:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.842309 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.842363 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.842375 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.842389 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.842400 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:33Z","lastTransitionTime":"2026-01-05T21:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.945346 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.945784 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.945860 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.945960 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:33 crc kubenswrapper[5000]: I0105 21:35:33.946041 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:33Z","lastTransitionTime":"2026-01-05T21:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.049177 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.049617 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.049709 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.049801 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.049879 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:34Z","lastTransitionTime":"2026-01-05T21:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.153085 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.153149 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.153174 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.153205 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.153229 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:34Z","lastTransitionTime":"2026-01-05T21:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.256464 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.256537 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.256551 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.256569 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.256580 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:34Z","lastTransitionTime":"2026-01-05T21:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.360017 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.360087 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.360111 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.360144 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.360168 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:34Z","lastTransitionTime":"2026-01-05T21:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.463291 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.463523 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.463590 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.463659 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.463734 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:34Z","lastTransitionTime":"2026-01-05T21:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.566481 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.566719 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.566849 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.566992 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.567097 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:34Z","lastTransitionTime":"2026-01-05T21:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.669663 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.669707 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.669750 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.669775 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.669789 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:34Z","lastTransitionTime":"2026-01-05T21:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.771815 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.771854 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.771864 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.771879 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.771905 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:34Z","lastTransitionTime":"2026-01-05T21:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.874525 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.874588 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.874603 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.874623 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.874639 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:34Z","lastTransitionTime":"2026-01-05T21:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.977194 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.977569 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.977729 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.977872 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:34 crc kubenswrapper[5000]: I0105 21:35:34.978078 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:34Z","lastTransitionTime":"2026-01-05T21:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.081126 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.081175 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.081190 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.081210 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.081225 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:35Z","lastTransitionTime":"2026-01-05T21:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.184359 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.184414 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.184432 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.184455 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.184472 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:35Z","lastTransitionTime":"2026-01-05T21:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.287256 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.287288 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.287295 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.287307 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.287317 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:35Z","lastTransitionTime":"2026-01-05T21:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.322970 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.323215 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.323305 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:35 crc kubenswrapper[5000]: E0105 21:35:35.323471 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.323606 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:35 crc kubenswrapper[5000]: E0105 21:35:35.323720 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:35 crc kubenswrapper[5000]: E0105 21:35:35.323927 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:35 crc kubenswrapper[5000]: E0105 21:35:35.323981 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.391202 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.391657 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.391808 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.392007 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.392141 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:35Z","lastTransitionTime":"2026-01-05T21:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.391680 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ckdm7" podStartSLOduration=66.391607978 podStartE2EDuration="1m6.391607978s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:35:35.390021434 +0000 UTC m=+90.346223943" watchObservedRunningTime="2026-01-05 21:35:35.391607978 +0000 UTC m=+90.347810487" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.493864 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.493850041 podStartE2EDuration="1m12.493850041s" podCreationTimestamp="2026-01-05 21:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:35:35.460798516 +0000 UTC m=+90.417001025" watchObservedRunningTime="2026-01-05 21:35:35.493850041 +0000 UTC m=+90.450052510" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.494880 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.494936 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.494952 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.494973 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.494988 5000 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:35Z","lastTransitionTime":"2026-01-05T21:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.515440 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=73.515418661 podStartE2EDuration="1m13.515418661s" podCreationTimestamp="2026-01-05 21:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:35:35.494128129 +0000 UTC m=+90.450330628" watchObservedRunningTime="2026-01-05 21:35:35.515418661 +0000 UTC m=+90.471621130" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.541283 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ht6xh" podStartSLOduration=66.541265792 podStartE2EDuration="1m6.541265792s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:35:35.540809619 +0000 UTC m=+90.497012098" watchObservedRunningTime="2026-01-05 21:35:35.541265792 +0000 UTC m=+90.497468271" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.541480 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podStartSLOduration=66.541475308 podStartE2EDuration="1m6.541475308s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-05 21:35:35.515989277 +0000 UTC m=+90.472191766" watchObservedRunningTime="2026-01-05 21:35:35.541475308 +0000 UTC m=+90.497677777" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.559936 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=37.559919550000004 podStartE2EDuration="37.55991955s" podCreationTimestamp="2026-01-05 21:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:35:35.559404435 +0000 UTC m=+90.515606914" watchObservedRunningTime="2026-01-05 21:35:35.55991955 +0000 UTC m=+90.516122019" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.596710 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.596747 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.596756 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.596769 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.596778 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:35Z","lastTransitionTime":"2026-01-05T21:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.601242 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-sd8pl" podStartSLOduration=66.601227209 podStartE2EDuration="1m6.601227209s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:35:35.601177997 +0000 UTC m=+90.557380466" watchObservedRunningTime="2026-01-05 21:35:35.601227209 +0000 UTC m=+90.557429688" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.610209 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-px9xc" podStartSLOduration=66.610187802 podStartE2EDuration="1m6.610187802s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:35:35.609929685 +0000 UTC m=+90.566132174" watchObservedRunningTime="2026-01-05 21:35:35.610187802 +0000 UTC m=+90.566390271" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.621867 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=8.621849512 podStartE2EDuration="8.621849512s" podCreationTimestamp="2026-01-05 21:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:35:35.621266206 +0000 UTC m=+90.577468675" watchObservedRunningTime="2026-01-05 21:35:35.621849512 +0000 UTC m=+90.578051981" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.661571 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7r7z6" podStartSLOduration=66.661549205 podStartE2EDuration="1m6.661549205s" 
podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:35:35.635467597 +0000 UTC m=+90.591670066" watchObservedRunningTime="2026-01-05 21:35:35.661549205 +0000 UTC m=+90.617751684" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.699184 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.699403 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.699467 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.699535 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.699591 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:35Z","lastTransitionTime":"2026-01-05T21:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.802021 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.802321 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.802387 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.802453 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.802514 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:35Z","lastTransitionTime":"2026-01-05T21:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.905799 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.906093 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.906173 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.906259 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:35 crc kubenswrapper[5000]: I0105 21:35:35.906340 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:35Z","lastTransitionTime":"2026-01-05T21:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.009215 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.009286 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.009298 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.009316 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.009328 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:36Z","lastTransitionTime":"2026-01-05T21:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.111539 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.111608 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.111631 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.111659 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.111681 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:36Z","lastTransitionTime":"2026-01-05T21:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.214087 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.214168 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.214192 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.214220 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.214247 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:36Z","lastTransitionTime":"2026-01-05T21:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.316881 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.316967 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.316995 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.317018 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.317037 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:36Z","lastTransitionTime":"2026-01-05T21:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.420577 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.420658 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.420681 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.420712 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.420737 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:36Z","lastTransitionTime":"2026-01-05T21:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.524542 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.524603 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.524620 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.524643 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.524659 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:36Z","lastTransitionTime":"2026-01-05T21:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.627767 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.627829 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.627851 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.627878 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.627940 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:36Z","lastTransitionTime":"2026-01-05T21:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.729852 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.729907 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.729917 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.729931 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.729939 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:36Z","lastTransitionTime":"2026-01-05T21:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.833676 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.833751 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.833774 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.833803 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.833825 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:36Z","lastTransitionTime":"2026-01-05T21:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.936148 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.936196 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.936208 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.936222 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:36 crc kubenswrapper[5000]: I0105 21:35:36.936233 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:36Z","lastTransitionTime":"2026-01-05T21:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.038845 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.038918 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.038935 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.038959 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.038976 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:37Z","lastTransitionTime":"2026-01-05T21:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.142359 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.142414 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.142426 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.142449 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.142464 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:37Z","lastTransitionTime":"2026-01-05T21:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.246584 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.246624 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.246635 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.246652 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.246663 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:37Z","lastTransitionTime":"2026-01-05T21:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.323850 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.323927 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:37 crc kubenswrapper[5000]: E0105 21:35:37.324062 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.324137 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.324134 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:37 crc kubenswrapper[5000]: E0105 21:35:37.324426 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:37 crc kubenswrapper[5000]: E0105 21:35:37.324663 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:37 crc kubenswrapper[5000]: E0105 21:35:37.325066 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.343446 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.349851 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.349907 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.349919 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.349935 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.349948 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:37Z","lastTransitionTime":"2026-01-05T21:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.451201 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.451231 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.451273 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.451287 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.451296 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:37Z","lastTransitionTime":"2026-01-05T21:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.553724 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.553768 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.553780 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.553796 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.553808 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:37Z","lastTransitionTime":"2026-01-05T21:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.655725 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.655779 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.655790 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.655804 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.655813 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:37Z","lastTransitionTime":"2026-01-05T21:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.758938 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.759019 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.759042 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.759069 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.759089 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:37Z","lastTransitionTime":"2026-01-05T21:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.822120 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.822180 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.822197 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.822220 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.822236 5000 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-05T21:35:37Z","lastTransitionTime":"2026-01-05T21:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.872193 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f"] Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.872685 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.874738 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.874954 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.877393 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.878606 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 05 21:35:37 crc kubenswrapper[5000]: I0105 21:35:37.909618 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=0.909595663 podStartE2EDuration="909.595663ms" podCreationTimestamp="2026-01-05 21:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:35:37.907772212 +0000 UTC m=+92.863974691" watchObservedRunningTime="2026-01-05 21:35:37.909595663 +0000 UTC m=+92.865798142" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.056290 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1379d72d-2a28-4ff2-81c7-2a2495d821fc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.056402 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1379d72d-2a28-4ff2-81c7-2a2495d821fc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.056599 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1379d72d-2a28-4ff2-81c7-2a2495d821fc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.056684 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1379d72d-2a28-4ff2-81c7-2a2495d821fc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.056765 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1379d72d-2a28-4ff2-81c7-2a2495d821fc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.157946 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1379d72d-2a28-4ff2-81c7-2a2495d821fc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.158001 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1379d72d-2a28-4ff2-81c7-2a2495d821fc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.158036 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1379d72d-2a28-4ff2-81c7-2a2495d821fc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.158069 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1379d72d-2a28-4ff2-81c7-2a2495d821fc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.158095 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1379d72d-2a28-4ff2-81c7-2a2495d821fc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.158119 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/1379d72d-2a28-4ff2-81c7-2a2495d821fc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.158178 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1379d72d-2a28-4ff2-81c7-2a2495d821fc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.159120 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1379d72d-2a28-4ff2-81c7-2a2495d821fc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.163491 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1379d72d-2a28-4ff2-81c7-2a2495d821fc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.182414 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1379d72d-2a28-4ff2-81c7-2a2495d821fc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-82b6f\" (UID: \"1379d72d-2a28-4ff2-81c7-2a2495d821fc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 
21:35:38.191945 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" Jan 05 21:35:38 crc kubenswrapper[5000]: W0105 21:35:38.206916 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1379d72d_2a28_4ff2_81c7_2a2495d821fc.slice/crio-a8cd38977fa2a6d2cc1f762d06f5d8915f998ec48ad60c6a6b1bb689ecb50650 WatchSource:0}: Error finding container a8cd38977fa2a6d2cc1f762d06f5d8915f998ec48ad60c6a6b1bb689ecb50650: Status 404 returned error can't find the container with id a8cd38977fa2a6d2cc1f762d06f5d8915f998ec48ad60c6a6b1bb689ecb50650 Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.775619 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" event={"ID":"1379d72d-2a28-4ff2-81c7-2a2495d821fc","Type":"ContainerStarted","Data":"015bb00f044988a089a4eb17c1c90d0013e1befecb075b6d3a72aa5b6ce4011a"} Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.775676 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" event={"ID":"1379d72d-2a28-4ff2-81c7-2a2495d821fc","Type":"ContainerStarted","Data":"a8cd38977fa2a6d2cc1f762d06f5d8915f998ec48ad60c6a6b1bb689ecb50650"} Jan 05 21:35:38 crc kubenswrapper[5000]: I0105 21:35:38.795580 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-82b6f" podStartSLOduration=69.795562408 podStartE2EDuration="1m9.795562408s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:35:38.79526757 +0000 UTC m=+93.751470039" watchObservedRunningTime="2026-01-05 21:35:38.795562408 +0000 UTC m=+93.751764877" Jan 05 
21:35:39 crc kubenswrapper[5000]: I0105 21:35:39.326097 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:39 crc kubenswrapper[5000]: E0105 21:35:39.326218 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:39 crc kubenswrapper[5000]: I0105 21:35:39.326273 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:39 crc kubenswrapper[5000]: I0105 21:35:39.326304 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:39 crc kubenswrapper[5000]: E0105 21:35:39.326358 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:39 crc kubenswrapper[5000]: E0105 21:35:39.326439 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:39 crc kubenswrapper[5000]: I0105 21:35:39.326304 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:39 crc kubenswrapper[5000]: E0105 21:35:39.326532 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:40 crc kubenswrapper[5000]: I0105 21:35:40.323528 5000 scope.go:117] "RemoveContainer" containerID="a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7" Jan 05 21:35:40 crc kubenswrapper[5000]: E0105 21:35:40.323709 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" Jan 05 21:35:41 crc kubenswrapper[5000]: I0105 21:35:41.323441 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:41 crc kubenswrapper[5000]: E0105 21:35:41.323792 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:41 crc kubenswrapper[5000]: I0105 21:35:41.323451 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:41 crc kubenswrapper[5000]: E0105 21:35:41.323881 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:41 crc kubenswrapper[5000]: I0105 21:35:41.323476 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:41 crc kubenswrapper[5000]: I0105 21:35:41.323438 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:41 crc kubenswrapper[5000]: E0105 21:35:41.323970 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:41 crc kubenswrapper[5000]: E0105 21:35:41.324153 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:43 crc kubenswrapper[5000]: I0105 21:35:43.323227 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:43 crc kubenswrapper[5000]: E0105 21:35:43.323774 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:43 crc kubenswrapper[5000]: I0105 21:35:43.323329 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:43 crc kubenswrapper[5000]: E0105 21:35:43.324105 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:43 crc kubenswrapper[5000]: I0105 21:35:43.323294 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:43 crc kubenswrapper[5000]: E0105 21:35:43.324327 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:43 crc kubenswrapper[5000]: I0105 21:35:43.323353 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:43 crc kubenswrapper[5000]: E0105 21:35:43.324512 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:45 crc kubenswrapper[5000]: I0105 21:35:45.323685 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:45 crc kubenswrapper[5000]: I0105 21:35:45.323753 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:45 crc kubenswrapper[5000]: E0105 21:35:45.325161 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:45 crc kubenswrapper[5000]: I0105 21:35:45.325182 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:45 crc kubenswrapper[5000]: I0105 21:35:45.325209 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:45 crc kubenswrapper[5000]: E0105 21:35:45.325349 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:45 crc kubenswrapper[5000]: E0105 21:35:45.325396 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:45 crc kubenswrapper[5000]: E0105 21:35:45.325438 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:47 crc kubenswrapper[5000]: I0105 21:35:47.322834 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:47 crc kubenswrapper[5000]: I0105 21:35:47.322834 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:47 crc kubenswrapper[5000]: I0105 21:35:47.322933 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:47 crc kubenswrapper[5000]: E0105 21:35:47.323465 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:47 crc kubenswrapper[5000]: E0105 21:35:47.323493 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:47 crc kubenswrapper[5000]: E0105 21:35:47.323251 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:47 crc kubenswrapper[5000]: I0105 21:35:47.322935 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:47 crc kubenswrapper[5000]: E0105 21:35:47.323606 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:47 crc kubenswrapper[5000]: I0105 21:35:47.553712 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:47 crc kubenswrapper[5000]: E0105 21:35:47.553834 5000 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:35:47 crc kubenswrapper[5000]: E0105 21:35:47.553906 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs podName:b3a4c991-8f85-4923-afb4-8cc78ceeaed8 nodeName:}" failed. No retries permitted until 2026-01-05 21:36:51.553872006 +0000 UTC m=+166.510074475 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs") pod "network-metrics-daemon-gpwcw" (UID: "b3a4c991-8f85-4923-afb4-8cc78ceeaed8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 05 21:35:49 crc kubenswrapper[5000]: I0105 21:35:49.323184 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:49 crc kubenswrapper[5000]: I0105 21:35:49.323195 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:49 crc kubenswrapper[5000]: I0105 21:35:49.323237 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:49 crc kubenswrapper[5000]: I0105 21:35:49.323195 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:49 crc kubenswrapper[5000]: E0105 21:35:49.323588 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:49 crc kubenswrapper[5000]: E0105 21:35:49.323772 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:49 crc kubenswrapper[5000]: E0105 21:35:49.323821 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:49 crc kubenswrapper[5000]: E0105 21:35:49.323863 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:51 crc kubenswrapper[5000]: I0105 21:35:51.322754 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:51 crc kubenswrapper[5000]: I0105 21:35:51.322877 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:51 crc kubenswrapper[5000]: I0105 21:35:51.323038 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:51 crc kubenswrapper[5000]: I0105 21:35:51.323089 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:51 crc kubenswrapper[5000]: E0105 21:35:51.323317 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:51 crc kubenswrapper[5000]: E0105 21:35:51.323395 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:51 crc kubenswrapper[5000]: E0105 21:35:51.323512 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:51 crc kubenswrapper[5000]: E0105 21:35:51.323640 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:53 crc kubenswrapper[5000]: I0105 21:35:53.323918 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:53 crc kubenswrapper[5000]: I0105 21:35:53.324006 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:53 crc kubenswrapper[5000]: I0105 21:35:53.324045 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:53 crc kubenswrapper[5000]: I0105 21:35:53.323942 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:53 crc kubenswrapper[5000]: E0105 21:35:53.324239 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:53 crc kubenswrapper[5000]: E0105 21:35:53.324348 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:53 crc kubenswrapper[5000]: E0105 21:35:53.324452 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:53 crc kubenswrapper[5000]: E0105 21:35:53.324599 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:55 crc kubenswrapper[5000]: I0105 21:35:55.322984 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:55 crc kubenswrapper[5000]: I0105 21:35:55.323014 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:55 crc kubenswrapper[5000]: I0105 21:35:55.322983 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:55 crc kubenswrapper[5000]: I0105 21:35:55.330116 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:55 crc kubenswrapper[5000]: E0105 21:35:55.330112 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:55 crc kubenswrapper[5000]: E0105 21:35:55.330269 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:55 crc kubenswrapper[5000]: E0105 21:35:55.330721 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:55 crc kubenswrapper[5000]: I0105 21:35:55.330879 5000 scope.go:117] "RemoveContainer" containerID="a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7" Jan 05 21:35:55 crc kubenswrapper[5000]: E0105 21:35:55.330950 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:55 crc kubenswrapper[5000]: E0105 21:35:55.331037 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-f5k4c_openshift-ovn-kubernetes(a1406b03-70e6-4874-8cfe-5991e43cc720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" Jan 05 21:35:57 crc kubenswrapper[5000]: I0105 21:35:57.323364 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:57 crc kubenswrapper[5000]: E0105 21:35:57.323499 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:57 crc kubenswrapper[5000]: I0105 21:35:57.323527 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:57 crc kubenswrapper[5000]: I0105 21:35:57.323545 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:57 crc kubenswrapper[5000]: I0105 21:35:57.323626 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:57 crc kubenswrapper[5000]: E0105 21:35:57.323715 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:57 crc kubenswrapper[5000]: E0105 21:35:57.323774 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:57 crc kubenswrapper[5000]: E0105 21:35:57.324037 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:35:59 crc kubenswrapper[5000]: I0105 21:35:59.323681 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:35:59 crc kubenswrapper[5000]: I0105 21:35:59.324293 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:35:59 crc kubenswrapper[5000]: E0105 21:35:59.324407 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:35:59 crc kubenswrapper[5000]: I0105 21:35:59.324504 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:35:59 crc kubenswrapper[5000]: I0105 21:35:59.324578 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:35:59 crc kubenswrapper[5000]: E0105 21:35:59.324731 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:35:59 crc kubenswrapper[5000]: E0105 21:35:59.324879 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:35:59 crc kubenswrapper[5000]: E0105 21:35:59.324983 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:36:01 crc kubenswrapper[5000]: I0105 21:36:01.323765 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:01 crc kubenswrapper[5000]: I0105 21:36:01.323811 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:01 crc kubenswrapper[5000]: I0105 21:36:01.323776 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:01 crc kubenswrapper[5000]: I0105 21:36:01.323765 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:01 crc kubenswrapper[5000]: E0105 21:36:01.323912 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:36:01 crc kubenswrapper[5000]: E0105 21:36:01.323987 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:36:01 crc kubenswrapper[5000]: E0105 21:36:01.324049 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:36:01 crc kubenswrapper[5000]: E0105 21:36:01.324104 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:36:02 crc kubenswrapper[5000]: I0105 21:36:02.852972 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sd8pl_c10b7118-eb24-495a-bb8f-bc46a3c38799/kube-multus/1.log" Jan 05 21:36:02 crc kubenswrapper[5000]: I0105 21:36:02.853366 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sd8pl_c10b7118-eb24-495a-bb8f-bc46a3c38799/kube-multus/0.log" Jan 05 21:36:02 crc kubenswrapper[5000]: I0105 21:36:02.853399 5000 generic.go:334] "Generic (PLEG): container finished" podID="c10b7118-eb24-495a-bb8f-bc46a3c38799" containerID="d9046be61fa273923c77fe35be04fbf84a891ee4c803f73f42de122fa83f8ba0" exitCode=1 Jan 05 21:36:02 crc kubenswrapper[5000]: I0105 21:36:02.853423 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sd8pl" event={"ID":"c10b7118-eb24-495a-bb8f-bc46a3c38799","Type":"ContainerDied","Data":"d9046be61fa273923c77fe35be04fbf84a891ee4c803f73f42de122fa83f8ba0"} Jan 05 21:36:02 crc kubenswrapper[5000]: I0105 21:36:02.853452 5000 scope.go:117] "RemoveContainer" containerID="0242384cf90a5df89991e111927da1e83fbf03c5198da091ce51a8720563dfa7" Jan 05 21:36:02 crc kubenswrapper[5000]: I0105 21:36:02.854030 5000 scope.go:117] "RemoveContainer" containerID="d9046be61fa273923c77fe35be04fbf84a891ee4c803f73f42de122fa83f8ba0" Jan 05 21:36:02 crc kubenswrapper[5000]: E0105 21:36:02.854184 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-sd8pl_openshift-multus(c10b7118-eb24-495a-bb8f-bc46a3c38799)\"" pod="openshift-multus/multus-sd8pl" podUID="c10b7118-eb24-495a-bb8f-bc46a3c38799" Jan 05 21:36:03 crc kubenswrapper[5000]: I0105 21:36:03.323561 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:03 crc kubenswrapper[5000]: I0105 21:36:03.323606 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:03 crc kubenswrapper[5000]: I0105 21:36:03.323566 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:03 crc kubenswrapper[5000]: E0105 21:36:03.323692 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:36:03 crc kubenswrapper[5000]: I0105 21:36:03.323780 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:03 crc kubenswrapper[5000]: E0105 21:36:03.323878 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:36:03 crc kubenswrapper[5000]: E0105 21:36:03.324077 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:36:03 crc kubenswrapper[5000]: E0105 21:36:03.324159 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:36:03 crc kubenswrapper[5000]: I0105 21:36:03.858017 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sd8pl_c10b7118-eb24-495a-bb8f-bc46a3c38799/kube-multus/1.log" Jan 05 21:36:05 crc kubenswrapper[5000]: E0105 21:36:05.288448 5000 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 05 21:36:05 crc kubenswrapper[5000]: I0105 21:36:05.324657 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:05 crc kubenswrapper[5000]: E0105 21:36:05.324736 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:36:05 crc kubenswrapper[5000]: I0105 21:36:05.324962 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:05 crc kubenswrapper[5000]: E0105 21:36:05.325031 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:36:05 crc kubenswrapper[5000]: I0105 21:36:05.325146 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:05 crc kubenswrapper[5000]: E0105 21:36:05.325190 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:36:05 crc kubenswrapper[5000]: I0105 21:36:05.325371 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:05 crc kubenswrapper[5000]: E0105 21:36:05.325434 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:36:05 crc kubenswrapper[5000]: E0105 21:36:05.434499 5000 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 21:36:07 crc kubenswrapper[5000]: I0105 21:36:07.323164 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:07 crc kubenswrapper[5000]: I0105 21:36:07.323231 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:07 crc kubenswrapper[5000]: I0105 21:36:07.323255 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:07 crc kubenswrapper[5000]: I0105 21:36:07.323282 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:07 crc kubenswrapper[5000]: E0105 21:36:07.323317 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:36:07 crc kubenswrapper[5000]: E0105 21:36:07.323393 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:36:07 crc kubenswrapper[5000]: E0105 21:36:07.323480 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:36:07 crc kubenswrapper[5000]: E0105 21:36:07.323554 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:36:09 crc kubenswrapper[5000]: I0105 21:36:09.322792 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:09 crc kubenswrapper[5000]: E0105 21:36:09.323067 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:36:09 crc kubenswrapper[5000]: I0105 21:36:09.323166 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:09 crc kubenswrapper[5000]: E0105 21:36:09.323294 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:36:09 crc kubenswrapper[5000]: I0105 21:36:09.323365 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:09 crc kubenswrapper[5000]: I0105 21:36:09.323365 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:09 crc kubenswrapper[5000]: E0105 21:36:09.323545 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:36:09 crc kubenswrapper[5000]: E0105 21:36:09.323663 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:36:10 crc kubenswrapper[5000]: I0105 21:36:10.323734 5000 scope.go:117] "RemoveContainer" containerID="a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7" Jan 05 21:36:10 crc kubenswrapper[5000]: E0105 21:36:10.436275 5000 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 21:36:10 crc kubenswrapper[5000]: I0105 21:36:10.877765 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/3.log" Jan 05 21:36:10 crc kubenswrapper[5000]: I0105 21:36:10.879640 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerStarted","Data":"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915"} Jan 05 21:36:10 crc kubenswrapper[5000]: I0105 21:36:10.880672 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:36:11 crc kubenswrapper[5000]: I0105 21:36:11.323191 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:11 crc kubenswrapper[5000]: I0105 21:36:11.323235 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:11 crc kubenswrapper[5000]: I0105 21:36:11.323201 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:11 crc kubenswrapper[5000]: I0105 21:36:11.323191 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:11 crc kubenswrapper[5000]: E0105 21:36:11.323408 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:36:11 crc kubenswrapper[5000]: E0105 21:36:11.323323 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:36:11 crc kubenswrapper[5000]: E0105 21:36:11.323485 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:36:11 crc kubenswrapper[5000]: E0105 21:36:11.323536 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:36:11 crc kubenswrapper[5000]: I0105 21:36:11.333126 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podStartSLOduration=102.333105918 podStartE2EDuration="1m42.333105918s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:10.909072642 +0000 UTC m=+125.865275121" watchObservedRunningTime="2026-01-05 21:36:11.333105918 +0000 UTC m=+126.289308387" Jan 05 21:36:11 crc kubenswrapper[5000]: I0105 21:36:11.334309 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gpwcw"] Jan 05 21:36:11 crc kubenswrapper[5000]: I0105 21:36:11.882836 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:11 crc kubenswrapper[5000]: E0105 21:36:11.882994 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:36:13 crc kubenswrapper[5000]: I0105 21:36:13.323042 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:13 crc kubenswrapper[5000]: I0105 21:36:13.323042 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:13 crc kubenswrapper[5000]: I0105 21:36:13.323199 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:13 crc kubenswrapper[5000]: I0105 21:36:13.323192 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:13 crc kubenswrapper[5000]: E0105 21:36:13.323279 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:36:13 crc kubenswrapper[5000]: E0105 21:36:13.323353 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:36:13 crc kubenswrapper[5000]: E0105 21:36:13.323423 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:36:13 crc kubenswrapper[5000]: I0105 21:36:13.323488 5000 scope.go:117] "RemoveContainer" containerID="d9046be61fa273923c77fe35be04fbf84a891ee4c803f73f42de122fa83f8ba0" Jan 05 21:36:13 crc kubenswrapper[5000]: E0105 21:36:13.323494 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:36:13 crc kubenswrapper[5000]: I0105 21:36:13.890768 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sd8pl_c10b7118-eb24-495a-bb8f-bc46a3c38799/kube-multus/1.log" Jan 05 21:36:13 crc kubenswrapper[5000]: I0105 21:36:13.891118 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sd8pl" event={"ID":"c10b7118-eb24-495a-bb8f-bc46a3c38799","Type":"ContainerStarted","Data":"56e710d4bb2d817674bc8f198e27521b38e972da7d83bffffca3188109845c6f"} Jan 05 21:36:15 crc kubenswrapper[5000]: I0105 21:36:15.323295 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:15 crc kubenswrapper[5000]: I0105 21:36:15.323344 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:15 crc kubenswrapper[5000]: I0105 21:36:15.323295 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:15 crc kubenswrapper[5000]: E0105 21:36:15.325057 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:36:15 crc kubenswrapper[5000]: I0105 21:36:15.325090 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:15 crc kubenswrapper[5000]: E0105 21:36:15.325166 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:36:15 crc kubenswrapper[5000]: E0105 21:36:15.325185 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:36:15 crc kubenswrapper[5000]: E0105 21:36:15.325264 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:36:15 crc kubenswrapper[5000]: E0105 21:36:15.436965 5000 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 05 21:36:17 crc kubenswrapper[5000]: I0105 21:36:17.322783 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:17 crc kubenswrapper[5000]: I0105 21:36:17.322872 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:17 crc kubenswrapper[5000]: I0105 21:36:17.322943 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:17 crc kubenswrapper[5000]: E0105 21:36:17.322969 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:36:17 crc kubenswrapper[5000]: I0105 21:36:17.322980 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:17 crc kubenswrapper[5000]: E0105 21:36:17.323083 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:36:17 crc kubenswrapper[5000]: E0105 21:36:17.323175 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:36:17 crc kubenswrapper[5000]: E0105 21:36:17.323280 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:36:19 crc kubenswrapper[5000]: I0105 21:36:19.323883 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:19 crc kubenswrapper[5000]: I0105 21:36:19.324092 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:19 crc kubenswrapper[5000]: I0105 21:36:19.324130 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:19 crc kubenswrapper[5000]: I0105 21:36:19.324131 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:19 crc kubenswrapper[5000]: E0105 21:36:19.324487 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 05 21:36:19 crc kubenswrapper[5000]: E0105 21:36:19.324623 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 05 21:36:19 crc kubenswrapper[5000]: E0105 21:36:19.324794 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gpwcw" podUID="b3a4c991-8f85-4923-afb4-8cc78ceeaed8" Jan 05 21:36:19 crc kubenswrapper[5000]: E0105 21:36:19.324945 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 05 21:36:21 crc kubenswrapper[5000]: I0105 21:36:21.323858 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:21 crc kubenswrapper[5000]: I0105 21:36:21.324213 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:21 crc kubenswrapper[5000]: I0105 21:36:21.324343 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:21 crc kubenswrapper[5000]: I0105 21:36:21.324770 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:21 crc kubenswrapper[5000]: I0105 21:36:21.326201 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 05 21:36:21 crc kubenswrapper[5000]: I0105 21:36:21.327184 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 05 21:36:21 crc kubenswrapper[5000]: I0105 21:36:21.327393 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 05 21:36:21 crc kubenswrapper[5000]: I0105 21:36:21.327581 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 05 21:36:21 crc kubenswrapper[5000]: I0105 21:36:21.328774 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 05 21:36:21 crc kubenswrapper[5000]: I0105 21:36:21.329289 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 05 21:36:21 crc kubenswrapper[5000]: I0105 21:36:21.374661 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.794671 5000 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.836413 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cfzn2"] Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.837065 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825"] Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 
21:36:28.837321 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx"] Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.837764 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.838290 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.838778 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.842874 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.846534 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9"] Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.846819 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.847123 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.847444 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fv7st"] Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.847609 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.847660 5000 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.847746 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.847988 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf"] Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.848190 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.848366 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.848471 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.848704 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7djbs"] Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.848715 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.848955 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.849049 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.849054 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.849196 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.849226 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.849197 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.849307 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.849339 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.849365 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.849516 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.854993 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d5n4f"] Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.856115 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.859000 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.860042 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.863717 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5jg6l"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.864074 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.864289 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.864558 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nqnqf"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.865641 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.866007 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.866181 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.866525 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.866655 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.866681 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.867081 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.867140 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.867472 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.867580 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-7mvq2"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.868205 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7mvq2"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.871798 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.872619 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.875820 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.876676 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.878018 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.882535 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tf7rj"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.882603 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.882779 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.882976 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.883050 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tf7rj"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.883169 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.899393 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.899554 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.925946 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.926079 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.926148 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.926216 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.926282 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.926352 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.926538 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.926689 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.926766 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.926934 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.927058 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.927174 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.927239 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.927283 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.927337 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.927428 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.927540 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.927655 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.928474 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.928542 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.928473 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.928851 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.929035 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.929169 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.929267 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.929319 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.929480 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.929590 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.929981 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.930116 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.930233 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.929630 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.929672 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.930830 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.931003 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.931208 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.931329 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.931482 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.931734 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.931839 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.931880 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.932067 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.932270 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.932358 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.932380 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-fskst"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.932550 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.932922 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.933000 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-fskst"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.932391 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.931844 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.932435 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.929173 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.932487 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.932614 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.932665 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.936565 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.936756 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.937145 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.944174 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-config\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946554 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d7313182-9b06-475a-a504-e5207fc2f330-audit-dir\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.938849 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946586 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dplr4\" (UniqueName: \"kubernetes.io/projected/1e99fac6-cc0b-4c09-9268-d77c4ab4b936-kube-api-access-dplr4\") pod \"openshift-apiserver-operator-796bbdcf4f-cmzkl\" (UID: \"1e99fac6-cc0b-4c09-9268-d77c4ab4b936\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946622 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8lrv\" (UniqueName: \"kubernetes.io/projected/b9490f60-a23b-4f00-baaf-c981be5e60cb-kube-api-access-c8lrv\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946641 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56c32f18-c8bd-409c-9501-164a49a93dcf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946666 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/89c433d9-cdda-4a3b-b82c-78e23f9d790b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t2pxx\" (UID: \"89c433d9-cdda-4a3b-b82c-78e23f9d790b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946703 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56c32f18-c8bd-409c-9501-164a49a93dcf-encryption-config\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946721 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk8vg\" (UniqueName: \"kubernetes.io/projected/7422b464-53bc-4f4a-8734-bb9f8d5ca846-kube-api-access-qk8vg\") pod \"openshift-config-operator-7777fb866f-5dxhf\" (UID: \"7422b464-53bc-4f4a-8734-bb9f8d5ca846\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946739 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-oauth-serving-cert\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.939148 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946756 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c183ccbe-bb04-4614-9f26-11266d34255b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dkpxf\" (UID: \"c183ccbe-bb04-4614-9f26-11266d34255b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946777 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/096d4722-b423-4819-a8fb-61556963fd3a-images\") pod \"machine-api-operator-5694c8668f-cfzn2\" (UID: \"096d4722-b423-4819-a8fb-61556963fd3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946793 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-console-config\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946822 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.939288 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946838 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncg6s\" (UniqueName: \"kubernetes.io/projected/5818841d-889c-49f1-96fc-efa5064f48b7-kube-api-access-ncg6s\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946858 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67e26059-23ca-4086-bc5a-f935a4c403ca-machine-approver-tls\") pod \"machine-approver-56656f9798-c9pgf\" (UID: \"67e26059-23ca-4086-bc5a-f935a4c403ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946877 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-w7s2l\" (UID: \"032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.939424 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946556 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hkznp"]
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.946914 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-serving-cert\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947029 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947086 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9490f60-a23b-4f00-baaf-c981be5e60cb-config\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947108 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c183ccbe-bb04-4614-9f26-11266d34255b-config\") pod \"kube-controller-manager-operator-78b949d7b-dkpxf\" (UID: \"c183ccbe-bb04-4614-9f26-11266d34255b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947162 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5818841d-889c-49f1-96fc-efa5064f48b7-etcd-ca\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947186 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9490f60-a23b-4f00-baaf-c981be5e60cb-service-ca-bundle\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947208 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txpmz\" (UniqueName: \"kubernetes.io/projected/89c433d9-cdda-4a3b-b82c-78e23f9d790b-kube-api-access-txpmz\") pod \"cluster-samples-operator-665b6dd947-t2pxx\" (UID: \"89c433d9-cdda-4a3b-b82c-78e23f9d790b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947231 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/125d3243-1198-4f7d-8930-d1890b5def2a-etcd-client\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947256 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947281 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe6c3a1-1534-4095-9e25-1f4ce093938e-config\") pod \"route-controller-manager-6576b87f9c-mr825\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947304 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947326 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p79nc\" (UniqueName: \"kubernetes.io/projected/d7313182-9b06-475a-a504-e5207fc2f330-kube-api-access-p79nc\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947351 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9490f60-a23b-4f00-baaf-c981be5e60cb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947384 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ff91b55-22e1-46ce-b31e-5235a1d5c6f3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rn246\" (UID: \"1ff91b55-22e1-46ce-b31e-5235a1d5c6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947407 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5818841d-889c-49f1-96fc-efa5064f48b7-etcd-client\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947432 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-etcd-serving-ca\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947454 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67e26059-23ca-4086-bc5a-f935a4c403ca-auth-proxy-config\") pod \"machine-approver-56656f9798-c9pgf\" (UID: \"67e26059-23ca-4086-bc5a-f935a4c403ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947479 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7422b464-53bc-4f4a-8734-bb9f8d5ca846-serving-cert\") pod \"openshift-config-operator-7777fb866f-5dxhf\" (UID: \"7422b464-53bc-4f4a-8734-bb9f8d5ca846\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947503 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-trusted-ca-bundle\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947526 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e99fac6-cc0b-4c09-9268-d77c4ab4b936-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cmzkl\" (UID: \"1e99fac6-cc0b-4c09-9268-d77c4ab4b936\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947554 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56c32f18-c8bd-409c-9501-164a49a93dcf-etcd-client\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947577 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-image-import-ca\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947602 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125d3243-1198-4f7d-8930-d1890b5def2a-serving-cert\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947626 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71825513-a9cf-4528-962f-b0c05006bdcd-console-oauth-config\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947652 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947678 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv5ft\" (UniqueName: \"kubernetes.io/projected/1ff91b55-22e1-46ce-b31e-5235a1d5c6f3-kube-api-access-wv5ft\") pod \"kube-storage-version-migrator-operator-b67b599dd-rn246\" (UID: \"1ff91b55-22e1-46ce-b31e-5235a1d5c6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947701 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56c32f18-c8bd-409c-9501-164a49a93dcf-serving-cert\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947727 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c32f18-c8bd-409c-9501-164a49a93dcf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947754 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947788 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ff91b55-22e1-46ce-b31e-5235a1d5c6f3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rn246\" (UID: \"1ff91b55-22e1-46ce-b31e-5235a1d5c6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246"
Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.939558 5000
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947815 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096d4722-b423-4819-a8fb-61556963fd3a-config\") pod \"machine-api-operator-5694c8668f-cfzn2\" (UID: \"096d4722-b423-4819-a8fb-61556963fd3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947844 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hkpb\" (UniqueName: \"kubernetes.io/projected/71825513-a9cf-4528-962f-b0c05006bdcd-kube-api-access-4hkpb\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.947866 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-audit-policies\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.959742 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.959747 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825"] Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.960814 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpmdv"] Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.960028 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.959809 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hkznp" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962213 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64r7\" (UniqueName: \"kubernetes.io/projected/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-kube-api-access-b64r7\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962258 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5818841d-889c-49f1-96fc-efa5064f48b7-etcd-service-ca\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962290 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbe6c3a1-1534-4095-9e25-1f4ce093938e-serving-cert\") pod \"route-controller-manager-6576b87f9c-mr825\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962318 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962347 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962372 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56c32f18-c8bd-409c-9501-164a49a93dcf-audit-dir\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962388 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-audit\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962409 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/125d3243-1198-4f7d-8930-d1890b5def2a-audit-dir\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962426 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwlwz\" (UniqueName: \"kubernetes.io/projected/125d3243-1198-4f7d-8930-d1890b5def2a-kube-api-access-hwlwz\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962449 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7422b464-53bc-4f4a-8734-bb9f8d5ca846-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5dxhf\" (UID: \"7422b464-53bc-4f4a-8734-bb9f8d5ca846\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962478 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e26059-23ca-4086-bc5a-f935a4c403ca-config\") pod \"machine-approver-56656f9798-c9pgf\" (UID: \"67e26059-23ca-4086-bc5a-f935a4c403ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962504 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbe6c3a1-1534-4095-9e25-1f4ce093938e-client-ca\") pod \"route-controller-manager-6576b87f9c-mr825\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962115 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962549 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/096d4722-b423-4819-a8fb-61556963fd3a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cfzn2\" (UID: \"096d4722-b423-4819-a8fb-61556963fd3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962566 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtf9r\" (UniqueName: \"kubernetes.io/projected/67e26059-23ca-4086-bc5a-f935a4c403ca-kube-api-access-wtf9r\") pod \"machine-approver-56656f9798-c9pgf\" (UID: \"67e26059-23ca-4086-bc5a-f935a4c403ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962586 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9490f60-a23b-4f00-baaf-c981be5e60cb-serving-cert\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962604 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c-config\") pod \"kube-apiserver-operator-766d6c64bb-w7s2l\" (UID: \"032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962619 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-w7s2l\" (UID: \"032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962646 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962662 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962689 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56c32f18-c8bd-409c-9501-164a49a93dcf-audit-policies\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962706 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/125d3243-1198-4f7d-8930-d1890b5def2a-encryption-config\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962723 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71825513-a9cf-4528-962f-b0c05006bdcd-console-serving-cert\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962745 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mstk5\" (UniqueName: \"kubernetes.io/projected/bbe6c3a1-1534-4095-9e25-1f4ce093938e-kube-api-access-mstk5\") pod \"route-controller-manager-6576b87f9c-mr825\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962765 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962782 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-config\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 
21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962798 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e99fac6-cc0b-4c09-9268-d77c4ab4b936-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cmzkl\" (UID: \"1e99fac6-cc0b-4c09-9268-d77c4ab4b936\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962816 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/125d3243-1198-4f7d-8930-d1890b5def2a-node-pullsecrets\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962831 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5818841d-889c-49f1-96fc-efa5064f48b7-serving-cert\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962849 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962948 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86897\" (UniqueName: 
\"kubernetes.io/projected/56c32f18-c8bd-409c-9501-164a49a93dcf-kube-api-access-86897\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.962982 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c183ccbe-bb04-4614-9f26-11266d34255b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dkpxf\" (UID: \"c183ccbe-bb04-4614-9f26-11266d34255b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.963012 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-service-ca\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.963031 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-client-ca\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.963048 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5818841d-889c-49f1-96fc-efa5064f48b7-config\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 
21:36:28.963068 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcrnd\" (UniqueName: \"kubernetes.io/projected/2245d315-61bc-4b08-8e67-ffb6f2b84674-kube-api-access-dcrnd\") pod \"downloads-7954f5f757-tf7rj\" (UID: \"2245d315-61bc-4b08-8e67-ffb6f2b84674\") " pod="openshift-console/downloads-7954f5f757-tf7rj" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.963095 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnnjp\" (UniqueName: \"kubernetes.io/projected/096d4722-b423-4819-a8fb-61556963fd3a-kube-api-access-jnnjp\") pod \"machine-api-operator-5694c8668f-cfzn2\" (UID: \"096d4722-b423-4819-a8fb-61556963fd3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.968132 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.968342 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.968453 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.973725 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.974628 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.974865 5000 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.975025 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.975228 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.978845 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pp4rh"] Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.979205 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.979381 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p"] Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.979728 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8mlm7"] Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.980454 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.984505 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.982575 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.983047 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.984753 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 05 21:36:28 crc kubenswrapper[5000]: I0105 21:36:28.997050 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.031469 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w4mfk"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.041139 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.041653 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.041135 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.042455 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.043598 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.044180 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.044815 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.045293 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.046099 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.046134 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.046215 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.046875 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.047497 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.047852 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.048111 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.048161 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.048309 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.049263 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.055099 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l7jnf"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.056963 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.057234 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.057981 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-z86kg"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.058126 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l7jnf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.058437 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.059038 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mggdq"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.059420 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cfzn2"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.059446 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.059938 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.060659 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.060192 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.060146 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-z86kg" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.060219 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.061329 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.062349 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070138 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-service-ca-bundle\") pod \"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070218 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-config\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070245 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/125d3243-1198-4f7d-8930-d1890b5def2a-node-pullsecrets\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070268 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070292 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f1846c9-70fd-44b0-8ea0-f0d67a308185-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fpmdv\" (UID: \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070312 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ca23e911-0c80-44ac-a1a4-ce0b242675f7-signing-key\") pod \"service-ca-9c57cc56f-8mlm7\" (UID: \"ca23e911-0c80-44ac-a1a4-ce0b242675f7\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070334 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c183ccbe-bb04-4614-9f26-11266d34255b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dkpxf\" (UID: \"c183ccbe-bb04-4614-9f26-11266d34255b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070352 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xgcp\" (UniqueName: \"kubernetes.io/projected/e757274f-5ba4-4aff-89ab-cb6887e52ad7-kube-api-access-8xgcp\") pod \"console-operator-58897d9998-pp4rh\" (UID: \"e757274f-5ba4-4aff-89ab-cb6887e52ad7\") " 
pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070370 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-client-ca\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070387 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcrnd\" (UniqueName: \"kubernetes.io/projected/2245d315-61bc-4b08-8e67-ffb6f2b84674-kube-api-access-dcrnd\") pod \"downloads-7954f5f757-tf7rj\" (UID: \"2245d315-61bc-4b08-8e67-ffb6f2b84674\") " pod="openshift-console/downloads-7954f5f757-tf7rj" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070402 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-config\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070422 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dplr4\" (UniqueName: \"kubernetes.io/projected/1e99fac6-cc0b-4c09-9268-d77c4ab4b936-kube-api-access-dplr4\") pod \"openshift-apiserver-operator-796bbdcf4f-cmzkl\" (UID: \"1e99fac6-cc0b-4c09-9268-d77c4ab4b936\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070440 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56c32f18-c8bd-409c-9501-164a49a93dcf-etcd-serving-ca\") pod 
\"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070458 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/89c433d9-cdda-4a3b-b82c-78e23f9d790b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t2pxx\" (UID: \"89c433d9-cdda-4a3b-b82c-78e23f9d790b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070482 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk8vg\" (UniqueName: \"kubernetes.io/projected/7422b464-53bc-4f4a-8734-bb9f8d5ca846-kube-api-access-qk8vg\") pod \"openshift-config-operator-7777fb866f-5dxhf\" (UID: \"7422b464-53bc-4f4a-8734-bb9f8d5ca846\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070498 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-oauth-serving-cert\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070520 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/096d4722-b423-4819-a8fb-61556963fd3a-images\") pod \"machine-api-operator-5694c8668f-cfzn2\" (UID: \"096d4722-b423-4819-a8fb-61556963fd3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070546 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-w7s2l\" (UID: \"032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070565 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00165d41-af6c-406d-a288-ab9be66824b8-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hkznp\" (UID: \"00165d41-af6c-406d-a288-ab9be66824b8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hkznp" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070559 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/125d3243-1198-4f7d-8930-d1890b5def2a-node-pullsecrets\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070585 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070648 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2edc99da-c399-450d-b55e-ac0c5ebe16af-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sc8gc\" (UID: \"2edc99da-c399-450d-b55e-ac0c5ebe16af\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070881 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/125d3243-1198-4f7d-8930-d1890b5def2a-etcd-client\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.070927 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.071986 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/096d4722-b423-4819-a8fb-61556963fd3a-images\") pod \"machine-api-operator-5694c8668f-cfzn2\" (UID: \"096d4722-b423-4819-a8fb-61556963fd3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.072107 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-config\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.072690 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.072749 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-fv7st"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.072766 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nqnqf"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.072778 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5jg6l"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.072792 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7djbs"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.072814 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.072825 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d5n4f"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.072836 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.073010 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56c32f18-c8bd-409c-9501-164a49a93dcf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.073077 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-config\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 
21:36:29.073378 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-client-ca\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.073497 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p79nc\" (UniqueName: \"kubernetes.io/projected/d7313182-9b06-475a-a504-e5207fc2f330-kube-api-access-p79nc\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.073706 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbd8f69e-6058-44de-b1f5-b6a0b413c3aa-proxy-tls\") pod \"machine-config-controller-84d6567774-ddm6w\" (UID: \"bbd8f69e-6058-44de-b1f5-b6a0b413c3aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.073828 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-stats-auth\") pod \"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.073971 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-trusted-ca-bundle\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " 
pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.074087 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125d3243-1198-4f7d-8930-d1890b5def2a-serving-cert\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.074198 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71825513-a9cf-4528-962f-b0c05006bdcd-console-oauth-config\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.074309 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56c32f18-c8bd-409c-9501-164a49a93dcf-etcd-client\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.074427 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv5ft\" (UniqueName: \"kubernetes.io/projected/1ff91b55-22e1-46ce-b31e-5235a1d5c6f3-kube-api-access-wv5ft\") pod \"kube-storage-version-migrator-operator-b67b599dd-rn246\" (UID: \"1ff91b55-22e1-46ce-b31e-5235a1d5c6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.074541 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56c32f18-c8bd-409c-9501-164a49a93dcf-serving-cert\") pod 
\"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.074640 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c32f18-c8bd-409c-9501-164a49a93dcf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.074744 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.074852 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096d4722-b423-4819-a8fb-61556963fd3a-config\") pod \"machine-api-operator-5694c8668f-cfzn2\" (UID: \"096d4722-b423-4819-a8fb-61556963fd3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.075212 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hkpb\" (UniqueName: \"kubernetes.io/projected/71825513-a9cf-4528-962f-b0c05006bdcd-kube-api-access-4hkpb\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.075323 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.075430 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5818841d-889c-49f1-96fc-efa5064f48b7-etcd-service-ca\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.075533 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbe6c3a1-1534-4095-9e25-1f4ce093938e-serving-cert\") pod \"route-controller-manager-6576b87f9c-mr825\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.075634 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.075737 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 
crc kubenswrapper[5000]: I0105 21:36:29.075828 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56c32f18-c8bd-409c-9501-164a49a93dcf-audit-dir\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078222 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-audit\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078270 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/125d3243-1198-4f7d-8930-d1890b5def2a-audit-dir\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078315 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwlwz\" (UniqueName: \"kubernetes.io/projected/125d3243-1198-4f7d-8930-d1890b5def2a-kube-api-access-hwlwz\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078350 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed-trusted-ca\") pod \"ingress-operator-5b745b69d9-6fm8d\" (UID: \"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" Jan 05 21:36:29 crc 
kubenswrapper[5000]: I0105 21:36:29.078377 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbe6c3a1-1534-4095-9e25-1f4ce093938e-client-ca\") pod \"route-controller-manager-6576b87f9c-mr825\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078404 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b6fd8ae-ef38-4894-b2dd-4336e25727c5-tmpfs\") pod \"packageserver-d55dfcdfc-5x54p\" (UID: \"7b6fd8ae-ef38-4894-b2dd-4336e25727c5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078430 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e757274f-5ba4-4aff-89ab-cb6887e52ad7-serving-cert\") pod \"console-operator-58897d9998-pp4rh\" (UID: \"e757274f-5ba4-4aff-89ab-cb6887e52ad7\") " pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078460 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g726\" (UniqueName: \"kubernetes.io/projected/7b6fd8ae-ef38-4894-b2dd-4336e25727c5-kube-api-access-5g726\") pod \"packageserver-d55dfcdfc-5x54p\" (UID: \"7b6fd8ae-ef38-4894-b2dd-4336e25727c5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078485 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-6fm8d\" (UID: \"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078517 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/096d4722-b423-4819-a8fb-61556963fd3a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cfzn2\" (UID: \"096d4722-b423-4819-a8fb-61556963fd3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078544 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9490f60-a23b-4f00-baaf-c981be5e60cb-serving-cert\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078572 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c-config\") pod \"kube-apiserver-operator-766d6c64bb-w7s2l\" (UID: \"032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078599 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078653 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6fm8d\" (UID: \"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078683 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71825513-a9cf-4528-962f-b0c05006bdcd-console-serving-cert\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078706 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078733 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c5p2\" (UniqueName: \"kubernetes.io/projected/bbd8f69e-6058-44de-b1f5-b6a0b413c3aa-kube-api-access-2c5p2\") pod \"machine-config-controller-84d6567774-ddm6w\" (UID: \"bbd8f69e-6058-44de-b1f5-b6a0b413c3aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078758 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-default-certificate\") pod \"router-default-5444994796-fskst\" (UID: 
\"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078784 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e99fac6-cc0b-4c09-9268-d77c4ab4b936-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cmzkl\" (UID: \"1e99fac6-cc0b-4c09-9268-d77c4ab4b936\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078811 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5818841d-889c-49f1-96fc-efa5064f48b7-serving-cert\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078834 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2edc99da-c399-450d-b55e-ac0c5ebe16af-images\") pod \"machine-config-operator-74547568cd-sc8gc\" (UID: \"2edc99da-c399-450d-b55e-ac0c5ebe16af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078845 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125d3243-1198-4f7d-8930-d1890b5def2a-serving-cert\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078859 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86897\" (UniqueName: 
\"kubernetes.io/projected/56c32f18-c8bd-409c-9501-164a49a93dcf-kube-api-access-86897\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078909 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-service-ca\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078946 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2edc99da-c399-450d-b55e-ac0c5ebe16af-proxy-tls\") pod \"machine-config-operator-74547568cd-sc8gc\" (UID: \"2edc99da-c399-450d-b55e-ac0c5ebe16af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078976 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5818841d-889c-49f1-96fc-efa5064f48b7-config\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079003 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnnjp\" (UniqueName: \"kubernetes.io/projected/096d4722-b423-4819-a8fb-61556963fd3a-kube-api-access-jnnjp\") pod \"machine-api-operator-5694c8668f-cfzn2\" (UID: \"096d4722-b423-4819-a8fb-61556963fd3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.076800 5000 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/125d3243-1198-4f7d-8930-d1890b5def2a-etcd-client\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079093 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d7313182-9b06-475a-a504-e5207fc2f330-audit-dir\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079124 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-metrics-certs\") pod \"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079153 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8lrv\" (UniqueName: \"kubernetes.io/projected/b9490f60-a23b-4f00-baaf-c981be5e60cb-kube-api-access-c8lrv\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079180 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbd8f69e-6058-44de-b1f5-b6a0b413c3aa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ddm6w\" (UID: \"bbd8f69e-6058-44de-b1f5-b6a0b413c3aa\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079242 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56c32f18-c8bd-409c-9501-164a49a93dcf-encryption-config\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079264 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c183ccbe-bb04-4614-9f26-11266d34255b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dkpxf\" (UID: \"c183ccbe-bb04-4614-9f26-11266d34255b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079285 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-console-config\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079307 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b6fd8ae-ef38-4894-b2dd-4336e25727c5-webhook-cert\") pod \"packageserver-d55dfcdfc-5x54p\" (UID: \"7b6fd8ae-ef38-4894-b2dd-4336e25727c5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079342 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079365 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncg6s\" (UniqueName: \"kubernetes.io/projected/5818841d-889c-49f1-96fc-efa5064f48b7-kube-api-access-ncg6s\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079389 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67e26059-23ca-4086-bc5a-f935a4c403ca-machine-approver-tls\") pod \"machine-approver-56656f9798-c9pgf\" (UID: \"67e26059-23ca-4086-bc5a-f935a4c403ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079414 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-serving-cert\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079439 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdvvm\" (UniqueName: \"kubernetes.io/projected/7f1846c9-70fd-44b0-8ea0-f0d67a308185-kube-api-access-mdvvm\") pod \"marketplace-operator-79b997595-fpmdv\" (UID: \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079471 
5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f1846c9-70fd-44b0-8ea0-f0d67a308185-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fpmdv\" (UID: \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079490 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68k6l\" (UniqueName: \"kubernetes.io/projected/b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed-kube-api-access-68k6l\") pod \"ingress-operator-5b745b69d9-6fm8d\" (UID: \"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079509 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9490f60-a23b-4f00-baaf-c981be5e60cb-config\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079529 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c183ccbe-bb04-4614-9f26-11266d34255b-config\") pod \"kube-controller-manager-operator-78b949d7b-dkpxf\" (UID: \"c183ccbe-bb04-4614-9f26-11266d34255b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079547 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b6fd8ae-ef38-4894-b2dd-4336e25727c5-apiservice-cert\") pod 
\"packageserver-d55dfcdfc-5x54p\" (UID: \"7b6fd8ae-ef38-4894-b2dd-4336e25727c5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079567 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9490f60-a23b-4f00-baaf-c981be5e60cb-service-ca-bundle\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079585 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txpmz\" (UniqueName: \"kubernetes.io/projected/89c433d9-cdda-4a3b-b82c-78e23f9d790b-kube-api-access-txpmz\") pod \"cluster-samples-operator-665b6dd947-t2pxx\" (UID: \"89c433d9-cdda-4a3b-b82c-78e23f9d790b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079591 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-audit\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079603 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5818841d-889c-49f1-96fc-efa5064f48b7-etcd-ca\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.077328 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/096d4722-b423-4819-a8fb-61556963fd3a-config\") pod \"machine-api-operator-5694c8668f-cfzn2\" (UID: \"096d4722-b423-4819-a8fb-61556963fd3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079619 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e757274f-5ba4-4aff-89ab-cb6887e52ad7-trusted-ca\") pod \"console-operator-58897d9998-pp4rh\" (UID: \"e757274f-5ba4-4aff-89ab-cb6887e52ad7\") " pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079657 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr6tm\" (UniqueName: \"kubernetes.io/projected/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-kube-api-access-gr6tm\") pod \"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079687 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe6c3a1-1534-4095-9e25-1f4ce093938e-config\") pod \"route-controller-manager-6576b87f9c-mr825\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079713 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc 
kubenswrapper[5000]: I0105 21:36:29.079744 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9490f60-a23b-4f00-baaf-c981be5e60cb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079770 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ff91b55-22e1-46ce-b31e-5235a1d5c6f3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rn246\" (UID: \"1ff91b55-22e1-46ce-b31e-5235a1d5c6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079795 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5818841d-889c-49f1-96fc-efa5064f48b7-etcd-client\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079819 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-etcd-serving-ca\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079844 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67e26059-23ca-4086-bc5a-f935a4c403ca-auth-proxy-config\") pod \"machine-approver-56656f9798-c9pgf\" (UID: 
\"67e26059-23ca-4086-bc5a-f935a4c403ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079869 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7422b464-53bc-4f4a-8734-bb9f8d5ca846-serving-cert\") pod \"openshift-config-operator-7777fb866f-5dxhf\" (UID: \"7422b464-53bc-4f4a-8734-bb9f8d5ca846\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079924 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e99fac6-cc0b-4c09-9268-d77c4ab4b936-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cmzkl\" (UID: \"1e99fac6-cc0b-4c09-9268-d77c4ab4b936\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079952 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079975 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ca23e911-0c80-44ac-a1a4-ce0b242675f7-signing-cabundle\") pod \"service-ca-9c57cc56f-8mlm7\" (UID: \"ca23e911-0c80-44ac-a1a4-ce0b242675f7\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.079999 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-image-import-ca\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080023 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbwh5\" (UniqueName: \"kubernetes.io/projected/00165d41-af6c-406d-a288-ab9be66824b8-kube-api-access-mbwh5\") pod \"multus-admission-controller-857f4d67dd-hkznp\" (UID: \"00165d41-af6c-406d-a288-ab9be66824b8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hkznp" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080048 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ff91b55-22e1-46ce-b31e-5235a1d5c6f3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rn246\" (UID: \"1ff91b55-22e1-46ce-b31e-5235a1d5c6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080070 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-audit-policies\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080094 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b64r7\" (UniqueName: \"kubernetes.io/projected/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-kube-api-access-b64r7\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080118 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfdgm\" (UniqueName: \"kubernetes.io/projected/2edc99da-c399-450d-b55e-ac0c5ebe16af-kube-api-access-cfdgm\") pod \"machine-config-operator-74547568cd-sc8gc\" (UID: \"2edc99da-c399-450d-b55e-ac0c5ebe16af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080148 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7422b464-53bc-4f4a-8734-bb9f8d5ca846-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5dxhf\" (UID: \"7422b464-53bc-4f4a-8734-bb9f8d5ca846\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080171 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e26059-23ca-4086-bc5a-f935a4c403ca-config\") pod \"machine-approver-56656f9798-c9pgf\" (UID: \"67e26059-23ca-4086-bc5a-f935a4c403ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080194 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e757274f-5ba4-4aff-89ab-cb6887e52ad7-config\") pod \"console-operator-58897d9998-pp4rh\" (UID: \"e757274f-5ba4-4aff-89ab-cb6887e52ad7\") " pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080214 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5cm7q\" (UniqueName: \"kubernetes.io/projected/ca23e911-0c80-44ac-a1a4-ce0b242675f7-kube-api-access-5cm7q\") pod \"service-ca-9c57cc56f-8mlm7\" (UID: \"ca23e911-0c80-44ac-a1a4-ce0b242675f7\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080256 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtf9r\" (UniqueName: \"kubernetes.io/projected/67e26059-23ca-4086-bc5a-f935a4c403ca-kube-api-access-wtf9r\") pod \"machine-approver-56656f9798-c9pgf\" (UID: \"67e26059-23ca-4086-bc5a-f935a4c403ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080280 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-w7s2l\" (UID: \"032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080305 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080328 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56c32f18-c8bd-409c-9501-164a49a93dcf-audit-policies\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080350 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/125d3243-1198-4f7d-8930-d1890b5def2a-encryption-config\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080374 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mstk5\" (UniqueName: \"kubernetes.io/projected/bbe6c3a1-1534-4095-9e25-1f4ce093938e-kube-api-access-mstk5\") pod \"route-controller-manager-6576b87f9c-mr825\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080557 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/125d3243-1198-4f7d-8930-d1890b5def2a-audit-dir\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.075031 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.077745 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56c32f18-c8bd-409c-9501-164a49a93dcf-audit-dir\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: 
\"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.077770 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tf7rj"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080705 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7mvq2"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.080723 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.081454 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.081579 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.081912 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c32f18-c8bd-409c-9501-164a49a93dcf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.082477 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-jfrdb"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.082920 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/bbe6c3a1-1534-4095-9e25-1f4ce093938e-client-ca\") pod \"route-controller-manager-6576b87f9c-mr825\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.078170 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.082994 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56c32f18-c8bd-409c-9501-164a49a93dcf-etcd-client\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.083537 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56c32f18-c8bd-409c-9501-164a49a93dcf-serving-cert\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.084279 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d7313182-9b06-475a-a504-e5207fc2f330-audit-dir\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.085168 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.085242 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jfrdb" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.086148 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.087332 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9490f60-a23b-4f00-baaf-c981be5e60cb-service-ca-bundle\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.087292 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9490f60-a23b-4f00-baaf-c981be5e60cb-config\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.088176 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/7422b464-53bc-4f4a-8734-bb9f8d5ca846-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5dxhf\" (UID: \"7422b464-53bc-4f4a-8734-bb9f8d5ca846\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.088830 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-etcd-serving-ca\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.089351 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe6c3a1-1534-4095-9e25-1f4ce093938e-config\") pod \"route-controller-manager-6576b87f9c-mr825\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.089461 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9490f60-a23b-4f00-baaf-c981be5e60cb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.090097 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56c32f18-c8bd-409c-9501-164a49a93dcf-encryption-config\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.090435 5000 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-oauth-serving-cert\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.091697 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-trusted-ca-bundle\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.091777 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c183ccbe-bb04-4614-9f26-11266d34255b-config\") pod \"kube-controller-manager-operator-78b949d7b-dkpxf\" (UID: \"c183ccbe-bb04-4614-9f26-11266d34255b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.091941 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c183ccbe-bb04-4614-9f26-11266d34255b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dkpxf\" (UID: \"c183ccbe-bb04-4614-9f26-11266d34255b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.091992 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wsljz"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.092485 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-serving-cert\") pod 
\"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.092798 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wsljz" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.093613 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7422b464-53bc-4f4a-8734-bb9f8d5ca846-serving-cert\") pod \"openshift-config-operator-7777fb866f-5dxhf\" (UID: \"7422b464-53bc-4f4a-8734-bb9f8d5ca846\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.093760 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.094785 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67e26059-23ca-4086-bc5a-f935a4c403ca-auth-proxy-config\") pod \"machine-approver-56656f9798-c9pgf\" (UID: \"67e26059-23ca-4086-bc5a-f935a4c403ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.095080 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.095559 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/bbe6c3a1-1534-4095-9e25-1f4ce093938e-serving-cert\") pod \"route-controller-manager-6576b87f9c-mr825\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.095699 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.095822 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-service-ca\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.096215 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-console-config\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.096225 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/096d4722-b423-4819-a8fb-61556963fd3a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cfzn2\" (UID: \"096d4722-b423-4819-a8fb-61556963fd3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.096469 5000 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/125d3243-1198-4f7d-8930-d1890b5def2a-image-import-ca\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.096532 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e99fac6-cc0b-4c09-9268-d77c4ab4b936-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cmzkl\" (UID: \"1e99fac6-cc0b-4c09-9268-d77c4ab4b936\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.096623 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/89c433d9-cdda-4a3b-b82c-78e23f9d790b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t2pxx\" (UID: \"89c433d9-cdda-4a3b-b82c-78e23f9d790b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.096733 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71825513-a9cf-4528-962f-b0c05006bdcd-console-oauth-config\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.096781 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5818841d-889c-49f1-96fc-efa5064f48b7-etcd-service-ca\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.096952 5000 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.097605 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56c32f18-c8bd-409c-9501-164a49a93dcf-audit-policies\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.097676 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e99fac6-cc0b-4c09-9268-d77c4ab4b936-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cmzkl\" (UID: \"1e99fac6-cc0b-4c09-9268-d77c4ab4b936\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.097774 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.097878 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-audit-policies\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.098032 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5818841d-889c-49f1-96fc-efa5064f48b7-serving-cert\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.098203 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.099165 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.099657 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9490f60-a23b-4f00-baaf-c981be5e60cb-serving-cert\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.105556 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5818841d-889c-49f1-96fc-efa5064f48b7-config\") pod 
\"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.105831 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e26059-23ca-4086-bc5a-f935a4c403ca-config\") pod \"machine-approver-56656f9798-c9pgf\" (UID: \"67e26059-23ca-4086-bc5a-f935a4c403ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.106160 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ff91b55-22e1-46ce-b31e-5235a1d5c6f3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rn246\" (UID: \"1ff91b55-22e1-46ce-b31e-5235a1d5c6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.106214 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/125d3243-1198-4f7d-8930-d1890b5def2a-encryption-config\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.106599 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5818841d-889c-49f1-96fc-efa5064f48b7-etcd-ca\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.108418 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.108436 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ff91b55-22e1-46ce-b31e-5235a1d5c6f3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rn246\" (UID: \"1ff91b55-22e1-46ce-b31e-5235a1d5c6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.108485 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.109113 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c-config\") pod \"kube-apiserver-operator-766d6c64bb-w7s2l\" (UID: \"032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.109189 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67e26059-23ca-4086-bc5a-f935a4c403ca-machine-approver-tls\") pod \"machine-approver-56656f9798-c9pgf\" (UID: \"67e26059-23ca-4086-bc5a-f935a4c403ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.109868 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/5818841d-889c-49f1-96fc-efa5064f48b7-etcd-client\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.109939 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.119205 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.119224 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.110499 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71825513-a9cf-4528-962f-b0c05006bdcd-console-serving-cert\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.120049 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.121000 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.121455 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.122061 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.122067 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-w7s2l\" (UID: \"032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.130516 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hkznp"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.131965 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.135166 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.135201 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.138218 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.141981 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.143676 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-canary/ingress-canary-jfrdb"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.144968 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.147127 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.148527 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.149653 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.150905 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mggdq"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.152122 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pp4rh"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.153962 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nt45v"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.155043 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-nt45v" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.155429 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.156885 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpmdv"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.158106 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l7jnf"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.159328 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w4mfk"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.160313 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.160370 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.161436 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nt45v"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.162631 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8mlm7"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.164352 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.165575 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-z86kg"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.166755 
5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9vkcb"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.168296 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.168325 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9vkcb"] Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181271 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed-trusted-ca\") pod \"ingress-operator-5b745b69d9-6fm8d\" (UID: \"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181321 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b6fd8ae-ef38-4894-b2dd-4336e25727c5-tmpfs\") pod \"packageserver-d55dfcdfc-5x54p\" (UID: \"7b6fd8ae-ef38-4894-b2dd-4336e25727c5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181348 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e757274f-5ba4-4aff-89ab-cb6887e52ad7-serving-cert\") pod \"console-operator-58897d9998-pp4rh\" (UID: \"e757274f-5ba4-4aff-89ab-cb6887e52ad7\") " pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181372 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g726\" (UniqueName: \"kubernetes.io/projected/7b6fd8ae-ef38-4894-b2dd-4336e25727c5-kube-api-access-5g726\") pod \"packageserver-d55dfcdfc-5x54p\" (UID: 
\"7b6fd8ae-ef38-4894-b2dd-4336e25727c5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181394 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed-metrics-tls\") pod \"ingress-operator-5b745b69d9-6fm8d\" (UID: \"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181420 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6fm8d\" (UID: \"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181458 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c5p2\" (UniqueName: \"kubernetes.io/projected/bbd8f69e-6058-44de-b1f5-b6a0b413c3aa-kube-api-access-2c5p2\") pod \"machine-config-controller-84d6567774-ddm6w\" (UID: \"bbd8f69e-6058-44de-b1f5-b6a0b413c3aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181484 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-default-certificate\") pod \"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181506 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/2edc99da-c399-450d-b55e-ac0c5ebe16af-images\") pod \"machine-config-operator-74547568cd-sc8gc\" (UID: \"2edc99da-c399-450d-b55e-ac0c5ebe16af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181542 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2edc99da-c399-450d-b55e-ac0c5ebe16af-proxy-tls\") pod \"machine-config-operator-74547568cd-sc8gc\" (UID: \"2edc99da-c399-450d-b55e-ac0c5ebe16af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181579 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-metrics-certs\") pod \"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181614 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbd8f69e-6058-44de-b1f5-b6a0b413c3aa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ddm6w\" (UID: \"bbd8f69e-6058-44de-b1f5-b6a0b413c3aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181694 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b6fd8ae-ef38-4894-b2dd-4336e25727c5-webhook-cert\") pod \"packageserver-d55dfcdfc-5x54p\" (UID: \"7b6fd8ae-ef38-4894-b2dd-4336e25727c5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 
21:36:29.181732 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdvvm\" (UniqueName: \"kubernetes.io/projected/7f1846c9-70fd-44b0-8ea0-f0d67a308185-kube-api-access-mdvvm\") pod \"marketplace-operator-79b997595-fpmdv\" (UID: \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181942 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68k6l\" (UniqueName: \"kubernetes.io/projected/b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed-kube-api-access-68k6l\") pod \"ingress-operator-5b745b69d9-6fm8d\" (UID: \"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.181974 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f1846c9-70fd-44b0-8ea0-f0d67a308185-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fpmdv\" (UID: \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182001 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b6fd8ae-ef38-4894-b2dd-4336e25727c5-apiservice-cert\") pod \"packageserver-d55dfcdfc-5x54p\" (UID: \"7b6fd8ae-ef38-4894-b2dd-4336e25727c5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182036 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e757274f-5ba4-4aff-89ab-cb6887e52ad7-trusted-ca\") pod \"console-operator-58897d9998-pp4rh\" (UID: 
\"e757274f-5ba4-4aff-89ab-cb6887e52ad7\") " pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182058 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr6tm\" (UniqueName: \"kubernetes.io/projected/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-kube-api-access-gr6tm\") pod \"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182090 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ca23e911-0c80-44ac-a1a4-ce0b242675f7-signing-cabundle\") pod \"service-ca-9c57cc56f-8mlm7\" (UID: \"ca23e911-0c80-44ac-a1a4-ce0b242675f7\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182117 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbwh5\" (UniqueName: \"kubernetes.io/projected/00165d41-af6c-406d-a288-ab9be66824b8-kube-api-access-mbwh5\") pod \"multus-admission-controller-857f4d67dd-hkznp\" (UID: \"00165d41-af6c-406d-a288-ab9be66824b8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hkznp" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182140 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfdgm\" (UniqueName: \"kubernetes.io/projected/2edc99da-c399-450d-b55e-ac0c5ebe16af-kube-api-access-cfdgm\") pod \"machine-config-operator-74547568cd-sc8gc\" (UID: \"2edc99da-c399-450d-b55e-ac0c5ebe16af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182184 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e757274f-5ba4-4aff-89ab-cb6887e52ad7-config\") pod \"console-operator-58897d9998-pp4rh\" (UID: \"e757274f-5ba4-4aff-89ab-cb6887e52ad7\") " pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182207 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cm7q\" (UniqueName: \"kubernetes.io/projected/ca23e911-0c80-44ac-a1a4-ce0b242675f7-kube-api-access-5cm7q\") pod \"service-ca-9c57cc56f-8mlm7\" (UID: \"ca23e911-0c80-44ac-a1a4-ce0b242675f7\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182263 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-service-ca-bundle\") pod \"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182287 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f1846c9-70fd-44b0-8ea0-f0d67a308185-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fpmdv\" (UID: \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182308 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ca23e911-0c80-44ac-a1a4-ce0b242675f7-signing-key\") pod \"service-ca-9c57cc56f-8mlm7\" (UID: \"ca23e911-0c80-44ac-a1a4-ce0b242675f7\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182326 5000 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-8xgcp\" (UniqueName: \"kubernetes.io/projected/e757274f-5ba4-4aff-89ab-cb6887e52ad7-kube-api-access-8xgcp\") pod \"console-operator-58897d9998-pp4rh\" (UID: \"e757274f-5ba4-4aff-89ab-cb6887e52ad7\") " pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182363 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00165d41-af6c-406d-a288-ab9be66824b8-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hkznp\" (UID: \"00165d41-af6c-406d-a288-ab9be66824b8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hkznp" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182366 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b6fd8ae-ef38-4894-b2dd-4336e25727c5-tmpfs\") pod \"packageserver-d55dfcdfc-5x54p\" (UID: \"7b6fd8ae-ef38-4894-b2dd-4336e25727c5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182387 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2edc99da-c399-450d-b55e-ac0c5ebe16af-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sc8gc\" (UID: \"2edc99da-c399-450d-b55e-ac0c5ebe16af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182483 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbd8f69e-6058-44de-b1f5-b6a0b413c3aa-proxy-tls\") pod \"machine-config-controller-84d6567774-ddm6w\" (UID: \"bbd8f69e-6058-44de-b1f5-b6a0b413c3aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" Jan 05 
21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182551 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-stats-auth\") pod \"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.182584 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bbd8f69e-6058-44de-b1f5-b6a0b413c3aa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ddm6w\" (UID: \"bbd8f69e-6058-44de-b1f5-b6a0b413c3aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.183416 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2edc99da-c399-450d-b55e-ac0c5ebe16af-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sc8gc\" (UID: \"2edc99da-c399-450d-b55e-ac0c5ebe16af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.185142 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed-metrics-tls\") pod \"ingress-operator-5b745b69d9-6fm8d\" (UID: \"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.188342 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.192379 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed-trusted-ca\") pod \"ingress-operator-5b745b69d9-6fm8d\" (UID: \"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.200644 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.219955 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.241242 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.245802 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-default-certificate\") pod \"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.260968 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.280365 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.300470 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.304630 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-metrics-certs\") pod 
\"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.321100 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.324410 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-service-ca-bundle\") pod \"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.340222 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.345543 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-stats-auth\") pod \"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.361065 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.380505 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.382541 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2edc99da-c399-450d-b55e-ac0c5ebe16af-images\") pod \"machine-config-operator-74547568cd-sc8gc\" (UID: \"2edc99da-c399-450d-b55e-ac0c5ebe16af\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.400470 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.404875 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2edc99da-c399-450d-b55e-ac0c5ebe16af-proxy-tls\") pod \"machine-config-operator-74547568cd-sc8gc\" (UID: \"2edc99da-c399-450d-b55e-ac0c5ebe16af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.420790 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.440455 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.445294 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bbd8f69e-6058-44de-b1f5-b6a0b413c3aa-proxy-tls\") pod \"machine-config-controller-84d6567774-ddm6w\" (UID: \"bbd8f69e-6058-44de-b1f5-b6a0b413c3aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.460327 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.480812 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.485365 5000 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/00165d41-af6c-406d-a288-ab9be66824b8-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hkznp\" (UID: \"00165d41-af6c-406d-a288-ab9be66824b8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hkznp" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.499920 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.520422 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.541115 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.560997 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.565598 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f1846c9-70fd-44b0-8ea0-f0d67a308185-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fpmdv\" (UID: \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.586393 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.593862 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f1846c9-70fd-44b0-8ea0-f0d67a308185-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fpmdv\" (UID: 
\"7f1846c9-70fd-44b0-8ea0-f0d67a308185\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.600829 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.621371 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.641282 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.643203 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e757274f-5ba4-4aff-89ab-cb6887e52ad7-config\") pod \"console-operator-58897d9998-pp4rh\" (UID: \"e757274f-5ba4-4aff-89ab-cb6887e52ad7\") " pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.660665 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.681322 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.684707 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e757274f-5ba4-4aff-89ab-cb6887e52ad7-serving-cert\") pod \"console-operator-58897d9998-pp4rh\" (UID: \"e757274f-5ba4-4aff-89ab-cb6887e52ad7\") " pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.706651 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" 
Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.714675 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e757274f-5ba4-4aff-89ab-cb6887e52ad7-trusted-ca\") pod \"console-operator-58897d9998-pp4rh\" (UID: \"e757274f-5ba4-4aff-89ab-cb6887e52ad7\") " pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.720272 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.740768 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.760261 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.765173 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b6fd8ae-ef38-4894-b2dd-4336e25727c5-apiservice-cert\") pod \"packageserver-d55dfcdfc-5x54p\" (UID: \"7b6fd8ae-ef38-4894-b2dd-4336e25727c5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.765279 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b6fd8ae-ef38-4894-b2dd-4336e25727c5-webhook-cert\") pod \"packageserver-d55dfcdfc-5x54p\" (UID: \"7b6fd8ae-ef38-4894-b2dd-4336e25727c5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.781324 5000 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.801012 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.820220 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.840318 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.860820 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.866291 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ca23e911-0c80-44ac-a1a4-ce0b242675f7-signing-key\") pod \"service-ca-9c57cc56f-8mlm7\" (UID: \"ca23e911-0c80-44ac-a1a4-ce0b242675f7\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.880281 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.900440 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.903193 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ca23e911-0c80-44ac-a1a4-ce0b242675f7-signing-cabundle\") pod \"service-ca-9c57cc56f-8mlm7\" (UID: \"ca23e911-0c80-44ac-a1a4-ce0b242675f7\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.946562 
5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.961677 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 05 21:36:29 crc kubenswrapper[5000]: I0105 21:36:29.981249 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.001025 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.020826 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.042190 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.058617 5000 request.go:700] Waited for 1.010087165s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&limit=500&resourceVersion=0 Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.060392 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.082456 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.101738 5000 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.121068 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.140805 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.162240 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.181360 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.200966 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.219923 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.261221 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.280802 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.300063 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 
21:36:30.321004 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.341166 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.360112 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.385714 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.401079 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.420589 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.440916 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.461276 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.480515 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.501132 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.521636 5000 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.541618 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.561797 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.581957 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.601329 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.621104 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.657351 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcrnd\" (UniqueName: \"kubernetes.io/projected/2245d315-61bc-4b08-8e67-ffb6f2b84674-kube-api-access-dcrnd\") pod \"downloads-7954f5f757-tf7rj\" (UID: \"2245d315-61bc-4b08-8e67-ffb6f2b84674\") " pod="openshift-console/downloads-7954f5f757-tf7rj" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.675284 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk8vg\" (UniqueName: \"kubernetes.io/projected/7422b464-53bc-4f4a-8734-bb9f8d5ca846-kube-api-access-qk8vg\") pod \"openshift-config-operator-7777fb866f-5dxhf\" (UID: \"7422b464-53bc-4f4a-8734-bb9f8d5ca846\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.699323 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dplr4\" (UniqueName: \"kubernetes.io/projected/1e99fac6-cc0b-4c09-9268-d77c4ab4b936-kube-api-access-dplr4\") pod \"openshift-apiserver-operator-796bbdcf4f-cmzkl\" (UID: \"1e99fac6-cc0b-4c09-9268-d77c4ab4b936\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.711513 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tf7rj" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.714804 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p79nc\" (UniqueName: \"kubernetes.io/projected/d7313182-9b06-475a-a504-e5207fc2f330-kube-api-access-p79nc\") pod \"oauth-openshift-558db77b4-5jg6l\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.737365 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hkpb\" (UniqueName: \"kubernetes.io/projected/71825513-a9cf-4528-962f-b0c05006bdcd-kube-api-access-4hkpb\") pod \"console-f9d7485db-7mvq2\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.758601 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86897\" (UniqueName: \"kubernetes.io/projected/56c32f18-c8bd-409c-9501-164a49a93dcf-kube-api-access-86897\") pod \"apiserver-7bbb656c7d-krkd9\" (UID: \"56c32f18-c8bd-409c-9501-164a49a93dcf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.761573 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.776499 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mstk5\" (UniqueName: \"kubernetes.io/projected/bbe6c3a1-1534-4095-9e25-1f4ce093938e-kube-api-access-mstk5\") pod \"route-controller-manager-6576b87f9c-mr825\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.794708 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwlwz\" (UniqueName: \"kubernetes.io/projected/125d3243-1198-4f7d-8930-d1890b5def2a-kube-api-access-hwlwz\") pod \"apiserver-76f77b778f-7djbs\" (UID: \"125d3243-1198-4f7d-8930-d1890b5def2a\") " pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.820610 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv5ft\" (UniqueName: \"kubernetes.io/projected/1ff91b55-22e1-46ce-b31e-5235a1d5c6f3-kube-api-access-wv5ft\") pod \"kube-storage-version-migrator-operator-b67b599dd-rn246\" (UID: \"1ff91b55-22e1-46ce-b31e-5235a1d5c6f3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.838686 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8lrv\" (UniqueName: \"kubernetes.io/projected/b9490f60-a23b-4f00-baaf-c981be5e60cb-kube-api-access-c8lrv\") pod \"authentication-operator-69f744f599-fv7st\" (UID: \"b9490f60-a23b-4f00-baaf-c981be5e60cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.845656 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.855123 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c183ccbe-bb04-4614-9f26-11266d34255b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dkpxf\" (UID: \"c183ccbe-bb04-4614-9f26-11266d34255b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.861222 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.878486 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnnjp\" (UniqueName: \"kubernetes.io/projected/096d4722-b423-4819-a8fb-61556963fd3a-kube-api-access-jnnjp\") pod \"machine-api-operator-5694c8668f-cfzn2\" (UID: \"096d4722-b423-4819-a8fb-61556963fd3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.894592 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncg6s\" (UniqueName: \"kubernetes.io/projected/5818841d-889c-49f1-96fc-efa5064f48b7-kube-api-access-ncg6s\") pod \"etcd-operator-b45778765-nqnqf\" (UID: \"5818841d-889c-49f1-96fc-efa5064f48b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.900994 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.920352 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 05 21:36:30 crc kubenswrapper[5000]: 
I0105 21:36:30.941443 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.957723 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.961254 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.966510 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.972915 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.981563 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.981443 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.994821 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txpmz\" (UniqueName: \"kubernetes.io/projected/89c433d9-cdda-4a3b-b82c-78e23f9d790b-kube-api-access-txpmz\") pod \"cluster-samples-operator-665b6dd947-t2pxx\" (UID: \"89c433d9-cdda-4a3b-b82c-78e23f9d790b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx" Jan 05 21:36:30 crc kubenswrapper[5000]: I0105 21:36:30.995420 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.001366 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.003715 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.022388 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.041397 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.041627 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.059096 5000 request.go:700] Waited for 1.96229033s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.073603 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b64r7\" (UniqueName: \"kubernetes.io/projected/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-kube-api-access-b64r7\") pod \"controller-manager-879f6c89f-d5n4f\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.093491 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.094799 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-w7s2l\" (UID: \"032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.114538 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtf9r\" (UniqueName: \"kubernetes.io/projected/67e26059-23ca-4086-bc5a-f935a4c403ca-kube-api-access-wtf9r\") pod \"machine-approver-56656f9798-c9pgf\" (UID: \"67e26059-23ca-4086-bc5a-f935a4c403ca\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.120923 5000 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.140631 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.160782 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.175434 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.181361 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.200949 5000 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.219814 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.240579 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.241718 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.259948 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.281832 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g726\" (UniqueName: \"kubernetes.io/projected/7b6fd8ae-ef38-4894-b2dd-4336e25727c5-kube-api-access-5g726\") pod \"packageserver-d55dfcdfc-5x54p\" (UID: \"7b6fd8ae-ef38-4894-b2dd-4336e25727c5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.308449 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c5p2\" (UniqueName: \"kubernetes.io/projected/bbd8f69e-6058-44de-b1f5-b6a0b413c3aa-kube-api-access-2c5p2\") pod \"machine-config-controller-84d6567774-ddm6w\" (UID: \"bbd8f69e-6058-44de-b1f5-b6a0b413c3aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.317638 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.317974 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.318034 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:31 crc kubenswrapper[5000]: E0105 21:36:31.318636 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:38:33.318558125 +0000 UTC m=+268.274760594 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.318725 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.320029 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.322463 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.325246 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6fm8d\" (UID: \"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.335936 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdvvm\" (UniqueName: \"kubernetes.io/projected/7f1846c9-70fd-44b0-8ea0-f0d67a308185-kube-api-access-mdvvm\") pod \"marketplace-operator-79b997595-fpmdv\" (UID: \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.348386 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.358705 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68k6l\" (UniqueName: \"kubernetes.io/projected/b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed-kube-api-access-68k6l\") pod \"ingress-operator-5b745b69d9-6fm8d\" (UID: \"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.360003 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.390017 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr6tm\" (UniqueName: \"kubernetes.io/projected/d97efce6-8e46-4981-ae4b-1d1d5b24bbf9-kube-api-access-gr6tm\") pod \"router-default-5444994796-fskst\" (UID: \"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9\") " pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.393631 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.396259 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbwh5\" (UniqueName: \"kubernetes.io/projected/00165d41-af6c-406d-a288-ab9be66824b8-kube-api-access-mbwh5\") pod \"multus-admission-controller-857f4d67dd-hkznp\" (UID: \"00165d41-af6c-406d-a288-ab9be66824b8\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hkznp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.418781 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.418798 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfdgm\" (UniqueName: \"kubernetes.io/projected/2edc99da-c399-450d-b55e-ac0c5ebe16af-kube-api-access-cfdgm\") pod \"machine-config-operator-74547568cd-sc8gc\" (UID: \"2edc99da-c399-450d-b55e-ac0c5ebe16af\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.418812 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.435201 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" 
(UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.435979 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.441702 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cm7q\" (UniqueName: \"kubernetes.io/projected/ca23e911-0c80-44ac-a1a4-ce0b242675f7-kube-api-access-5cm7q\") pod \"service-ca-9c57cc56f-8mlm7\" (UID: \"ca23e911-0c80-44ac-a1a4-ce0b242675f7\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.464086 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xgcp\" (UniqueName: \"kubernetes.io/projected/e757274f-5ba4-4aff-89ab-cb6887e52ad7-kube-api-access-8xgcp\") pod \"console-operator-58897d9998-pp4rh\" (UID: \"e757274f-5ba4-4aff-89ab-cb6887e52ad7\") " pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.520114 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-bound-sa-token\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.520175 
5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/494f7900-b32c-47c4-8f3b-33dc5a054a7c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.520250 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/494f7900-b32c-47c4-8f3b-33dc5a054a7c-registry-certificates\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.520277 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/494f7900-b32c-47c4-8f3b-33dc5a054a7c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: E0105 21:36:31.521357 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:32.021342346 +0000 UTC m=+146.977544805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.522336 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.522455 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-registry-tls\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.522480 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/494f7900-b32c-47c4-8f3b-33dc5a054a7c-trusted-ca\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.522516 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j26zk\" (UniqueName: 
\"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-kube-api-access-j26zk\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.544591 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.555231 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.571565 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.623930 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:31 crc kubenswrapper[5000]: E0105 21:36:31.624030 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:32.124011622 +0000 UTC m=+147.080214091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624396 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6feccfd7-da12-44e9-beb1-701899d9f4c1-cert\") pod \"ingress-canary-jfrdb\" (UID: \"6feccfd7-da12-44e9-beb1-701899d9f4c1\") " pod="openshift-ingress-canary/ingress-canary-jfrdb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624427 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzjmb\" (UniqueName: \"kubernetes.io/projected/f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794-kube-api-access-dzjmb\") pod \"machine-config-server-wsljz\" (UID: \"f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794\") " pod="openshift-machine-config-operator/machine-config-server-wsljz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624471 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5aa88737-fa3d-4ebd-a9a0-90e709e47a01-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-px2rz\" (UID: \"5aa88737-fa3d-4ebd-a9a0-90e709e47a01\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624487 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cln26\" (UniqueName: 
\"kubernetes.io/projected/81ff9dcc-be92-40cf-b45b-ba49fc78918a-kube-api-access-cln26\") pod \"control-plane-machine-set-operator-78cbb6b69f-2cjfv\" (UID: \"81ff9dcc-be92-40cf-b45b-ba49fc78918a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624576 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-socket-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624622 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6e45757-5dfb-4c3b-ba8f-f448b66eaa44-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-prwld\" (UID: \"a6e45757-5dfb-4c3b-ba8f-f448b66eaa44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624643 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e3c75b0-db4e-4fc0-93d2-f46f7fa62683-config-volume\") pod \"dns-default-nt45v\" (UID: \"3e3c75b0-db4e-4fc0-93d2-f46f7fa62683\") " pod="openshift-dns/dns-default-nt45v" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624698 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6e45757-5dfb-4c3b-ba8f-f448b66eaa44-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-prwld\" (UID: \"a6e45757-5dfb-4c3b-ba8f-f448b66eaa44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" 
Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624734 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf36cce9-c0d2-4250-b980-2bec1a306493-srv-cert\") pod \"olm-operator-6b444d44fb-xbzjp\" (UID: \"cf36cce9-c0d2-4250-b980-2bec1a306493\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624770 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-bound-sa-token\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624826 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/494f7900-b32c-47c4-8f3b-33dc5a054a7c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624866 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794-node-bootstrap-token\") pod \"machine-config-server-wsljz\" (UID: \"f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794\") " pod="openshift-machine-config-operator/machine-config-server-wsljz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624914 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69xt9\" (UniqueName: \"kubernetes.io/projected/77750436-ae8c-4ab3-9647-dfd13c2822c6-kube-api-access-69xt9\") pod 
\"collect-profiles-29460810-tr26l\" (UID: \"77750436-ae8c-4ab3-9647-dfd13c2822c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624949 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/494f7900-b32c-47c4-8f3b-33dc5a054a7c-registry-certificates\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624967 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf36cce9-c0d2-4250-b980-2bec1a306493-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xbzjp\" (UID: \"cf36cce9-c0d2-4250-b980-2bec1a306493\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.624992 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e9e0853d-6397-42cc-9eaf-eda04f467b82-profile-collector-cert\") pod \"catalog-operator-68c6474976-jbnc9\" (UID: \"e9e0853d-6397-42cc-9eaf-eda04f467b82\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625007 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/837c014f-8de6-4bc9-883d-fe5833bf101a-serving-cert\") pod \"service-ca-operator-777779d784-mggdq\" (UID: \"837c014f-8de6-4bc9-883d-fe5833bf101a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625025 5000 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e435b45f-1446-4a36-afe1-86e1a057cbab-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r8mhp\" (UID: \"e435b45f-1446-4a36-afe1-86e1a057cbab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625047 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5aa88737-fa3d-4ebd-a9a0-90e709e47a01-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-px2rz\" (UID: \"5aa88737-fa3d-4ebd-a9a0-90e709e47a01\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625061 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e3c75b0-db4e-4fc0-93d2-f46f7fa62683-metrics-tls\") pod \"dns-default-nt45v\" (UID: \"3e3c75b0-db4e-4fc0-93d2-f46f7fa62683\") " pod="openshift-dns/dns-default-nt45v" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625086 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xhfr\" (UniqueName: \"kubernetes.io/projected/5aa88737-fa3d-4ebd-a9a0-90e709e47a01-kube-api-access-6xhfr\") pod \"cluster-image-registry-operator-dc59b4c8b-px2rz\" (UID: \"5aa88737-fa3d-4ebd-a9a0-90e709e47a01\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625102 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n9vb\" (UniqueName: 
\"kubernetes.io/projected/e435b45f-1446-4a36-afe1-86e1a057cbab-kube-api-access-7n9vb\") pod \"package-server-manager-789f6589d5-r8mhp\" (UID: \"e435b45f-1446-4a36-afe1-86e1a057cbab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625142 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/494f7900-b32c-47c4-8f3b-33dc5a054a7c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625168 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2brpg\" (UniqueName: \"kubernetes.io/projected/6feccfd7-da12-44e9-beb1-701899d9f4c1-kube-api-access-2brpg\") pod \"ingress-canary-jfrdb\" (UID: \"6feccfd7-da12-44e9-beb1-701899d9f4c1\") " pod="openshift-ingress-canary/ingress-canary-jfrdb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625188 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5aa88737-fa3d-4ebd-a9a0-90e709e47a01-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-px2rz\" (UID: \"5aa88737-fa3d-4ebd-a9a0-90e709e47a01\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625205 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/81ff9dcc-be92-40cf-b45b-ba49fc78918a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2cjfv\" (UID: 
\"81ff9dcc-be92-40cf-b45b-ba49fc78918a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625223 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-plugins-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625238 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837c014f-8de6-4bc9-883d-fe5833bf101a-config\") pod \"service-ca-operator-777779d784-mggdq\" (UID: \"837c014f-8de6-4bc9-883d-fe5833bf101a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625270 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72lcd\" (UniqueName: \"kubernetes.io/projected/837c014f-8de6-4bc9-883d-fe5833bf101a-kube-api-access-72lcd\") pod \"service-ca-operator-777779d784-mggdq\" (UID: \"837c014f-8de6-4bc9-883d-fe5833bf101a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625301 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77750436-ae8c-4ab3-9647-dfd13c2822c6-secret-volume\") pod \"collect-profiles-29460810-tr26l\" (UID: \"77750436-ae8c-4ab3-9647-dfd13c2822c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625325 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6e45757-5dfb-4c3b-ba8f-f448b66eaa44-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-prwld\" (UID: \"a6e45757-5dfb-4c3b-ba8f-f448b66eaa44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625349 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhxsh\" (UniqueName: \"kubernetes.io/projected/12d673e5-293f-4f9e-abe0-a92528bd45c3-kube-api-access-dhxsh\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625384 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4swf9\" (UniqueName: \"kubernetes.io/projected/cf36cce9-c0d2-4250-b980-2bec1a306493-kube-api-access-4swf9\") pod \"olm-operator-6b444d44fb-xbzjp\" (UID: \"cf36cce9-c0d2-4250-b980-2bec1a306493\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625412 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqfmt\" (UniqueName: \"kubernetes.io/projected/d750d899-ef96-457a-abb4-761a420bc277-kube-api-access-nqfmt\") pod \"dns-operator-744455d44c-z86kg\" (UID: \"d750d899-ef96-457a-abb4-761a420bc277\") " pod="openshift-dns-operator/dns-operator-744455d44c-z86kg" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625474 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d750d899-ef96-457a-abb4-761a420bc277-metrics-tls\") pod \"dns-operator-744455d44c-z86kg\" (UID: \"d750d899-ef96-457a-abb4-761a420bc277\") 
" pod="openshift-dns-operator/dns-operator-744455d44c-z86kg" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625501 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvmlk\" (UniqueName: \"kubernetes.io/projected/e9e0853d-6397-42cc-9eaf-eda04f467b82-kube-api-access-lvmlk\") pod \"catalog-operator-68c6474976-jbnc9\" (UID: \"e9e0853d-6397-42cc-9eaf-eda04f467b82\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625554 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625611 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794-certs\") pod \"machine-config-server-wsljz\" (UID: \"f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794\") " pod="openshift-machine-config-operator/machine-config-server-wsljz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625626 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-csi-data-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625662 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/e9e0853d-6397-42cc-9eaf-eda04f467b82-srv-cert\") pod \"catalog-operator-68c6474976-jbnc9\" (UID: \"e9e0853d-6397-42cc-9eaf-eda04f467b82\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625687 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkhvv\" (UniqueName: \"kubernetes.io/projected/911aeee9-3b02-4f97-8b71-51e57c8cf02e-kube-api-access-nkhvv\") pod \"migrator-59844c95c7-l7jnf\" (UID: \"911aeee9-3b02-4f97-8b71-51e57c8cf02e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l7jnf" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625722 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-registry-tls\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625739 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/494f7900-b32c-47c4-8f3b-33dc5a054a7c-trusted-ca\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625772 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-mountpoint-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625787 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77750436-ae8c-4ab3-9647-dfd13c2822c6-config-volume\") pod \"collect-profiles-29460810-tr26l\" (UID: \"77750436-ae8c-4ab3-9647-dfd13c2822c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625846 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-registration-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625863 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02fda1fd-a80f-4025-8b13-bfdf75f8ea0a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bs84b\" (UID: \"02fda1fd-a80f-4025-8b13-bfdf75f8ea0a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625879 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fda1fd-a80f-4025-8b13-bfdf75f8ea0a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bs84b\" (UID: \"02fda1fd-a80f-4025-8b13-bfdf75f8ea0a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625917 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j26zk\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-kube-api-access-j26zk\") pod 
\"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.625934 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zxgf\" (UniqueName: \"kubernetes.io/projected/02fda1fd-a80f-4025-8b13-bfdf75f8ea0a-kube-api-access-9zxgf\") pod \"openshift-controller-manager-operator-756b6f6bc6-bs84b\" (UID: \"02fda1fd-a80f-4025-8b13-bfdf75f8ea0a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.626089 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rlvj\" (UniqueName: \"kubernetes.io/projected/3e3c75b0-db4e-4fc0-93d2-f46f7fa62683-kube-api-access-2rlvj\") pod \"dns-default-nt45v\" (UID: \"3e3c75b0-db4e-4fc0-93d2-f46f7fa62683\") " pod="openshift-dns/dns-default-nt45v" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.630709 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/494f7900-b32c-47c4-8f3b-33dc5a054a7c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: E0105 21:36:31.631368 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:32.131355041 +0000 UTC m=+147.087557510 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.632421 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.632959 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.633547 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/494f7900-b32c-47c4-8f3b-33dc5a054a7c-trusted-ca\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.634073 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/494f7900-b32c-47c4-8f3b-33dc5a054a7c-registry-certificates\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.637502 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-registry-tls\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") 
" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.637996 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/494f7900-b32c-47c4-8f3b-33dc5a054a7c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.639039 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.654668 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hkznp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.670345 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.676367 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-bound-sa-token\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.684363 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.699758 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j26zk\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-kube-api-access-j26zk\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.733390 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.733699 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-socket-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: E0105 21:36:31.733780 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:32.233759571 +0000 UTC m=+147.189962050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.733825 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6e45757-5dfb-4c3b-ba8f-f448b66eaa44-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-prwld\" (UID: \"a6e45757-5dfb-4c3b-ba8f-f448b66eaa44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.733858 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e3c75b0-db4e-4fc0-93d2-f46f7fa62683-config-volume\") pod \"dns-default-nt45v\" (UID: \"3e3c75b0-db4e-4fc0-93d2-f46f7fa62683\") " pod="openshift-dns/dns-default-nt45v" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.733879 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6e45757-5dfb-4c3b-ba8f-f448b66eaa44-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-prwld\" (UID: \"a6e45757-5dfb-4c3b-ba8f-f448b66eaa44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.733946 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf36cce9-c0d2-4250-b980-2bec1a306493-srv-cert\") pod \"olm-operator-6b444d44fb-xbzjp\" (UID: 
\"cf36cce9-c0d2-4250-b980-2bec1a306493\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.733976 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794-node-bootstrap-token\") pod \"machine-config-server-wsljz\" (UID: \"f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794\") " pod="openshift-machine-config-operator/machine-config-server-wsljz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734000 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69xt9\" (UniqueName: \"kubernetes.io/projected/77750436-ae8c-4ab3-9647-dfd13c2822c6-kube-api-access-69xt9\") pod \"collect-profiles-29460810-tr26l\" (UID: \"77750436-ae8c-4ab3-9647-dfd13c2822c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734008 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-socket-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734026 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf36cce9-c0d2-4250-b980-2bec1a306493-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xbzjp\" (UID: \"cf36cce9-c0d2-4250-b980-2bec1a306493\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734106 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/e9e0853d-6397-42cc-9eaf-eda04f467b82-profile-collector-cert\") pod \"catalog-operator-68c6474976-jbnc9\" (UID: \"e9e0853d-6397-42cc-9eaf-eda04f467b82\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734144 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/837c014f-8de6-4bc9-883d-fe5833bf101a-serving-cert\") pod \"service-ca-operator-777779d784-mggdq\" (UID: \"837c014f-8de6-4bc9-883d-fe5833bf101a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734176 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e435b45f-1446-4a36-afe1-86e1a057cbab-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r8mhp\" (UID: \"e435b45f-1446-4a36-afe1-86e1a057cbab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734211 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e3c75b0-db4e-4fc0-93d2-f46f7fa62683-metrics-tls\") pod \"dns-default-nt45v\" (UID: \"3e3c75b0-db4e-4fc0-93d2-f46f7fa62683\") " pod="openshift-dns/dns-default-nt45v" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734244 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5aa88737-fa3d-4ebd-a9a0-90e709e47a01-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-px2rz\" (UID: \"5aa88737-fa3d-4ebd-a9a0-90e709e47a01\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 
21:36:31.734277 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xhfr\" (UniqueName: \"kubernetes.io/projected/5aa88737-fa3d-4ebd-a9a0-90e709e47a01-kube-api-access-6xhfr\") pod \"cluster-image-registry-operator-dc59b4c8b-px2rz\" (UID: \"5aa88737-fa3d-4ebd-a9a0-90e709e47a01\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734321 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n9vb\" (UniqueName: \"kubernetes.io/projected/e435b45f-1446-4a36-afe1-86e1a057cbab-kube-api-access-7n9vb\") pod \"package-server-manager-789f6589d5-r8mhp\" (UID: \"e435b45f-1446-4a36-afe1-86e1a057cbab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734359 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2brpg\" (UniqueName: \"kubernetes.io/projected/6feccfd7-da12-44e9-beb1-701899d9f4c1-kube-api-access-2brpg\") pod \"ingress-canary-jfrdb\" (UID: \"6feccfd7-da12-44e9-beb1-701899d9f4c1\") " pod="openshift-ingress-canary/ingress-canary-jfrdb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734398 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5aa88737-fa3d-4ebd-a9a0-90e709e47a01-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-px2rz\" (UID: \"5aa88737-fa3d-4ebd-a9a0-90e709e47a01\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734431 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/81ff9dcc-be92-40cf-b45b-ba49fc78918a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2cjfv\" (UID: \"81ff9dcc-be92-40cf-b45b-ba49fc78918a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734465 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-plugins-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734494 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837c014f-8de6-4bc9-883d-fe5833bf101a-config\") pod \"service-ca-operator-777779d784-mggdq\" (UID: \"837c014f-8de6-4bc9-883d-fe5833bf101a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734529 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72lcd\" (UniqueName: \"kubernetes.io/projected/837c014f-8de6-4bc9-883d-fe5833bf101a-kube-api-access-72lcd\") pod \"service-ca-operator-777779d784-mggdq\" (UID: \"837c014f-8de6-4bc9-883d-fe5833bf101a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734559 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77750436-ae8c-4ab3-9647-dfd13c2822c6-secret-volume\") pod \"collect-profiles-29460810-tr26l\" (UID: \"77750436-ae8c-4ab3-9647-dfd13c2822c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 
21:36:31.734589 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6e45757-5dfb-4c3b-ba8f-f448b66eaa44-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-prwld\" (UID: \"a6e45757-5dfb-4c3b-ba8f-f448b66eaa44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734619 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhxsh\" (UniqueName: \"kubernetes.io/projected/12d673e5-293f-4f9e-abe0-a92528bd45c3-kube-api-access-dhxsh\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734648 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4swf9\" (UniqueName: \"kubernetes.io/projected/cf36cce9-c0d2-4250-b980-2bec1a306493-kube-api-access-4swf9\") pod \"olm-operator-6b444d44fb-xbzjp\" (UID: \"cf36cce9-c0d2-4250-b980-2bec1a306493\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734677 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfmt\" (UniqueName: \"kubernetes.io/projected/d750d899-ef96-457a-abb4-761a420bc277-kube-api-access-nqfmt\") pod \"dns-operator-744455d44c-z86kg\" (UID: \"d750d899-ef96-457a-abb4-761a420bc277\") " pod="openshift-dns-operator/dns-operator-744455d44c-z86kg" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734708 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d750d899-ef96-457a-abb4-761a420bc277-metrics-tls\") pod \"dns-operator-744455d44c-z86kg\" (UID: 
\"d750d899-ef96-457a-abb4-761a420bc277\") " pod="openshift-dns-operator/dns-operator-744455d44c-z86kg" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734739 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvmlk\" (UniqueName: \"kubernetes.io/projected/e9e0853d-6397-42cc-9eaf-eda04f467b82-kube-api-access-lvmlk\") pod \"catalog-operator-68c6474976-jbnc9\" (UID: \"e9e0853d-6397-42cc-9eaf-eda04f467b82\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734785 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734837 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-csi-data-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734870 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794-certs\") pod \"machine-config-server-wsljz\" (UID: \"f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794\") " pod="openshift-machine-config-operator/machine-config-server-wsljz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734913 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/e9e0853d-6397-42cc-9eaf-eda04f467b82-srv-cert\") pod \"catalog-operator-68c6474976-jbnc9\" (UID: \"e9e0853d-6397-42cc-9eaf-eda04f467b82\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734948 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkhvv\" (UniqueName: \"kubernetes.io/projected/911aeee9-3b02-4f97-8b71-51e57c8cf02e-kube-api-access-nkhvv\") pod \"migrator-59844c95c7-l7jnf\" (UID: \"911aeee9-3b02-4f97-8b71-51e57c8cf02e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l7jnf" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.734983 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-mountpoint-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.735013 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77750436-ae8c-4ab3-9647-dfd13c2822c6-config-volume\") pod \"collect-profiles-29460810-tr26l\" (UID: \"77750436-ae8c-4ab3-9647-dfd13c2822c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.735047 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02fda1fd-a80f-4025-8b13-bfdf75f8ea0a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bs84b\" (UID: \"02fda1fd-a80f-4025-8b13-bfdf75f8ea0a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 
21:36:31.735073 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fda1fd-a80f-4025-8b13-bfdf75f8ea0a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bs84b\" (UID: \"02fda1fd-a80f-4025-8b13-bfdf75f8ea0a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.735104 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-registration-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.735137 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zxgf\" (UniqueName: \"kubernetes.io/projected/02fda1fd-a80f-4025-8b13-bfdf75f8ea0a-kube-api-access-9zxgf\") pod \"openshift-controller-manager-operator-756b6f6bc6-bs84b\" (UID: \"02fda1fd-a80f-4025-8b13-bfdf75f8ea0a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.735175 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rlvj\" (UniqueName: \"kubernetes.io/projected/3e3c75b0-db4e-4fc0-93d2-f46f7fa62683-kube-api-access-2rlvj\") pod \"dns-default-nt45v\" (UID: \"3e3c75b0-db4e-4fc0-93d2-f46f7fa62683\") " pod="openshift-dns/dns-default-nt45v" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.735205 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzjmb\" (UniqueName: \"kubernetes.io/projected/f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794-kube-api-access-dzjmb\") pod \"machine-config-server-wsljz\" (UID: 
\"f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794\") " pod="openshift-machine-config-operator/machine-config-server-wsljz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.735232 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6feccfd7-da12-44e9-beb1-701899d9f4c1-cert\") pod \"ingress-canary-jfrdb\" (UID: \"6feccfd7-da12-44e9-beb1-701899d9f4c1\") " pod="openshift-ingress-canary/ingress-canary-jfrdb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.735266 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cln26\" (UniqueName: \"kubernetes.io/projected/81ff9dcc-be92-40cf-b45b-ba49fc78918a-kube-api-access-cln26\") pod \"control-plane-machine-set-operator-78cbb6b69f-2cjfv\" (UID: \"81ff9dcc-be92-40cf-b45b-ba49fc78918a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.735298 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5aa88737-fa3d-4ebd-a9a0-90e709e47a01-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-px2rz\" (UID: \"5aa88737-fa3d-4ebd-a9a0-90e709e47a01\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.737052 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e3c75b0-db4e-4fc0-93d2-f46f7fa62683-config-volume\") pod \"dns-default-nt45v\" (UID: \"3e3c75b0-db4e-4fc0-93d2-f46f7fa62683\") " pod="openshift-dns/dns-default-nt45v" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.738297 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246"] Jan 05 21:36:31 crc 
kubenswrapper[5000]: I0105 21:36:31.738366 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5jg6l"] Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.738673 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf36cce9-c0d2-4250-b980-2bec1a306493-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xbzjp\" (UID: \"cf36cce9-c0d2-4250-b980-2bec1a306493\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.739129 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-csi-data-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.739732 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77750436-ae8c-4ab3-9647-dfd13c2822c6-config-volume\") pod \"collect-profiles-29460810-tr26l\" (UID: \"77750436-ae8c-4ab3-9647-dfd13c2822c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.745549 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-mountpoint-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.749236 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-plugins-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.750219 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fda1fd-a80f-4025-8b13-bfdf75f8ea0a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bs84b\" (UID: \"02fda1fd-a80f-4025-8b13-bfdf75f8ea0a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.742402 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6e45757-5dfb-4c3b-ba8f-f448b66eaa44-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-prwld\" (UID: \"a6e45757-5dfb-4c3b-ba8f-f448b66eaa44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.750532 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6e45757-5dfb-4c3b-ba8f-f448b66eaa44-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-prwld\" (UID: \"a6e45757-5dfb-4c3b-ba8f-f448b66eaa44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.750618 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12d673e5-293f-4f9e-abe0-a92528bd45c3-registration-dir\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.750786 5000 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf36cce9-c0d2-4250-b980-2bec1a306493-srv-cert\") pod \"olm-operator-6b444d44fb-xbzjp\" (UID: \"cf36cce9-c0d2-4250-b980-2bec1a306493\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.751056 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837c014f-8de6-4bc9-883d-fe5833bf101a-config\") pod \"service-ca-operator-777779d784-mggdq\" (UID: \"837c014f-8de6-4bc9-883d-fe5833bf101a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.751564 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e9e0853d-6397-42cc-9eaf-eda04f467b82-profile-collector-cert\") pod \"catalog-operator-68c6474976-jbnc9\" (UID: \"e9e0853d-6397-42cc-9eaf-eda04f467b82\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.752198 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5aa88737-fa3d-4ebd-a9a0-90e709e47a01-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-px2rz\" (UID: \"5aa88737-fa3d-4ebd-a9a0-90e709e47a01\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.752348 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e435b45f-1446-4a36-afe1-86e1a057cbab-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r8mhp\" (UID: \"e435b45f-1446-4a36-afe1-86e1a057cbab\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.753406 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e0853d-6397-42cc-9eaf-eda04f467b82-srv-cert\") pod \"catalog-operator-68c6474976-jbnc9\" (UID: \"e9e0853d-6397-42cc-9eaf-eda04f467b82\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.753623 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794-node-bootstrap-token\") pod \"machine-config-server-wsljz\" (UID: \"f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794\") " pod="openshift-machine-config-operator/machine-config-server-wsljz" Jan 05 21:36:31 crc kubenswrapper[5000]: E0105 21:36:31.753660 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:32.253643298 +0000 UTC m=+147.209845847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.753980 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/837c014f-8de6-4bc9-883d-fe5833bf101a-serving-cert\") pod \"service-ca-operator-777779d784-mggdq\" (UID: \"837c014f-8de6-4bc9-883d-fe5833bf101a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.755616 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02fda1fd-a80f-4025-8b13-bfdf75f8ea0a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bs84b\" (UID: \"02fda1fd-a80f-4025-8b13-bfdf75f8ea0a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.756958 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77750436-ae8c-4ab3-9647-dfd13c2822c6-secret-volume\") pod \"collect-profiles-29460810-tr26l\" (UID: \"77750436-ae8c-4ab3-9647-dfd13c2822c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.757629 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e3c75b0-db4e-4fc0-93d2-f46f7fa62683-metrics-tls\") pod \"dns-default-nt45v\" 
(UID: \"3e3c75b0-db4e-4fc0-93d2-f46f7fa62683\") " pod="openshift-dns/dns-default-nt45v" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.759494 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/81ff9dcc-be92-40cf-b45b-ba49fc78918a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2cjfv\" (UID: \"81ff9dcc-be92-40cf-b45b-ba49fc78918a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.760404 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794-certs\") pod \"machine-config-server-wsljz\" (UID: \"f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794\") " pod="openshift-machine-config-operator/machine-config-server-wsljz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.763056 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6feccfd7-da12-44e9-beb1-701899d9f4c1-cert\") pod \"ingress-canary-jfrdb\" (UID: \"6feccfd7-da12-44e9-beb1-701899d9f4c1\") " pod="openshift-ingress-canary/ingress-canary-jfrdb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.764598 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5aa88737-fa3d-4ebd-a9a0-90e709e47a01-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-px2rz\" (UID: \"5aa88737-fa3d-4ebd-a9a0-90e709e47a01\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.764985 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/d750d899-ef96-457a-abb4-761a420bc277-metrics-tls\") pod \"dns-operator-744455d44c-z86kg\" (UID: \"d750d899-ef96-457a-abb4-761a420bc277\") " pod="openshift-dns-operator/dns-operator-744455d44c-z86kg" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.770650 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tf7rj"] Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.791205 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5aa88737-fa3d-4ebd-a9a0-90e709e47a01-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-px2rz\" (UID: \"5aa88737-fa3d-4ebd-a9a0-90e709e47a01\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.798606 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqfmt\" (UniqueName: \"kubernetes.io/projected/d750d899-ef96-457a-abb4-761a420bc277-kube-api-access-nqfmt\") pod \"dns-operator-744455d44c-z86kg\" (UID: \"d750d899-ef96-457a-abb4-761a420bc277\") " pod="openshift-dns-operator/dns-operator-744455d44c-z86kg" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.818950 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69xt9\" (UniqueName: \"kubernetes.io/projected/77750436-ae8c-4ab3-9647-dfd13c2822c6-kube-api-access-69xt9\") pod \"collect-profiles-29460810-tr26l\" (UID: \"77750436-ae8c-4ab3-9647-dfd13c2822c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.840115 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:31 crc kubenswrapper[5000]: E0105 21:36:31.840591 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:32.340571026 +0000 UTC m=+147.296773495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.865918 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d5n4f"] Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.867792 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6e45757-5dfb-4c3b-ba8f-f448b66eaa44-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-prwld\" (UID: \"a6e45757-5dfb-4c3b-ba8f-f448b66eaa44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.869823 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7mvq2"] Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.875585 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkhvv\" (UniqueName: 
\"kubernetes.io/projected/911aeee9-3b02-4f97-8b71-51e57c8cf02e-kube-api-access-nkhvv\") pod \"migrator-59844c95c7-l7jnf\" (UID: \"911aeee9-3b02-4f97-8b71-51e57c8cf02e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l7jnf" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.881273 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9"] Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.896704 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n9vb\" (UniqueName: \"kubernetes.io/projected/e435b45f-1446-4a36-afe1-86e1a057cbab-kube-api-access-7n9vb\") pod \"package-server-manager-789f6589d5-r8mhp\" (UID: \"e435b45f-1446-4a36-afe1-86e1a057cbab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.897054 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhxsh\" (UniqueName: \"kubernetes.io/projected/12d673e5-293f-4f9e-abe0-a92528bd45c3-kube-api-access-dhxsh\") pod \"csi-hostpathplugin-9vkcb\" (UID: \"12d673e5-293f-4f9e-abe0-a92528bd45c3\") " pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.916619 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xhfr\" (UniqueName: \"kubernetes.io/projected/5aa88737-fa3d-4ebd-a9a0-90e709e47a01-kube-api-access-6xhfr\") pod \"cluster-image-registry-operator-dc59b4c8b-px2rz\" (UID: \"5aa88737-fa3d-4ebd-a9a0-90e709e47a01\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.941117 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:31 crc kubenswrapper[5000]: E0105 21:36:31.941723 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:32.441711739 +0000 UTC m=+147.397914208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.955673 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rlvj\" (UniqueName: \"kubernetes.io/projected/3e3c75b0-db4e-4fc0-93d2-f46f7fa62683-kube-api-access-2rlvj\") pod \"dns-default-nt45v\" (UID: \"3e3c75b0-db4e-4fc0-93d2-f46f7fa62683\") " pod="openshift-dns/dns-default-nt45v" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.963548 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72lcd\" (UniqueName: \"kubernetes.io/projected/837c014f-8de6-4bc9-883d-fe5833bf101a-kube-api-access-72lcd\") pod \"service-ca-operator-777779d784-mggdq\" (UID: \"837c014f-8de6-4bc9-883d-fe5833bf101a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.978953 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246" event={"ID":"1ff91b55-22e1-46ce-b31e-5235a1d5c6f3","Type":"ContainerStarted","Data":"d6d8cb94ce45f90ca2634d2baaef10712c34b0bd01fa749550892539e2e908fd"} Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.985236 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" event={"ID":"d7313182-9b06-475a-a504-e5207fc2f330","Type":"ContainerStarted","Data":"eab596c296d3a81a38850bd1ca76881dbd436164a2401d5ff338965af0db7fee"} Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.988094 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tf7rj" event={"ID":"2245d315-61bc-4b08-8e67-ffb6f2b84674","Type":"ContainerStarted","Data":"12c582a1c43dcd73c0d9b54233af125d57edf10a98e69d80dc7a537984d59ba7"} Jan 05 21:36:31 crc kubenswrapper[5000]: I0105 21:36:31.996661 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2brpg\" (UniqueName: \"kubernetes.io/projected/6feccfd7-da12-44e9-beb1-701899d9f4c1-kube-api-access-2brpg\") pod \"ingress-canary-jfrdb\" (UID: \"6feccfd7-da12-44e9-beb1-701899d9f4c1\") " pod="openshift-ingress-canary/ingress-canary-jfrdb" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.007328 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fskst" event={"ID":"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9","Type":"ContainerStarted","Data":"9df689858f71f99dd0d8da082d077fb8d39829bbd73d0addf8696d765975087a"} Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.007406 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fskst" event={"ID":"d97efce6-8e46-4981-ae4b-1d1d5b24bbf9","Type":"ContainerStarted","Data":"13d85081ef44fc7330f8ec98fa242d336ac7fd3d6748a1083f69ab4d5b33ac06"} Jan 05 21:36:32 crc 
kubenswrapper[5000]: I0105 21:36:32.011100 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" event={"ID":"67e26059-23ca-4086-bc5a-f935a4c403ca","Type":"ContainerStarted","Data":"3c2429ccedcfe42d65d107d11fc9ca07301566e164588b4b0aad33c0d1cf5c9b"} Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.011138 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" event={"ID":"67e26059-23ca-4086-bc5a-f935a4c403ca","Type":"ContainerStarted","Data":"41ed558ab4b9b3736994d8b9418aa16bb41c3d7205a7ab66d0cd034572e0c544"} Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.013159 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" event={"ID":"c661b9d0-ba17-41d2-94dd-f1c71fe529d0","Type":"ContainerStarted","Data":"cc84d78b788adf114ef8aef7b00706fd5c69a54e36075e9173b37de6e218d12c"} Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.014598 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.020482 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.027644 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvmlk\" (UniqueName: \"kubernetes.io/projected/e9e0853d-6397-42cc-9eaf-eda04f467b82-kube-api-access-lvmlk\") pod \"catalog-operator-68c6474976-jbnc9\" (UID: \"e9e0853d-6397-42cc-9eaf-eda04f467b82\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.030459 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l7jnf" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.048422 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:32 crc kubenswrapper[5000]: E0105 21:36:32.048780 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:32.548765931 +0000 UTC m=+147.504968400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.048831 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.055207 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.059595 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cln26\" (UniqueName: \"kubernetes.io/projected/81ff9dcc-be92-40cf-b45b-ba49fc78918a-kube-api-access-cln26\") pod \"control-plane-machine-set-operator-78cbb6b69f-2cjfv\" (UID: \"81ff9dcc-be92-40cf-b45b-ba49fc78918a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.059807 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-z86kg" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.067568 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.071031 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.071574 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jfrdb" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.072953 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.076797 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cfzn2"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.080317 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4swf9\" (UniqueName: \"kubernetes.io/projected/cf36cce9-c0d2-4250-b980-2bec1a306493-kube-api-access-4swf9\") pod \"olm-operator-6b444d44fb-xbzjp\" (UID: \"cf36cce9-c0d2-4250-b980-2bec1a306493\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.080408 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nqnqf"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.083422 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.083604 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.087137 5000 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fv7st"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.087425 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nt45v" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.087878 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzjmb\" (UniqueName: \"kubernetes.io/projected/f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794-kube-api-access-dzjmb\") pod \"machine-config-server-wsljz\" (UID: \"f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794\") " pod="openshift-machine-config-operator/machine-config-server-wsljz" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.088327 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.089505 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7djbs"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.098346 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zxgf\" (UniqueName: \"kubernetes.io/projected/02fda1fd-a80f-4025-8b13-bfdf75f8ea0a-kube-api-access-9zxgf\") pod \"openshift-controller-manager-operator-756b6f6bc6-bs84b\" (UID: \"02fda1fd-a80f-4025-8b13-bfdf75f8ea0a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.106375 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.152213 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.154680 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:32 crc kubenswrapper[5000]: E0105 21:36:32.155019 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:32.65500767 +0000 UTC m=+147.611210139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:32 crc kubenswrapper[5000]: W0105 21:36:32.175863 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7422b464_53bc_4f4a_8734_bb9f8d5ca846.slice/crio-b519733a909bdf31c166c5eb56401696b21e63ebdacdad31298706b20bb25e7d WatchSource:0}: Error finding container b519733a909bdf31c166c5eb56401696b21e63ebdacdad31298706b20bb25e7d: Status 404 returned error can't find the container with id b519733a909bdf31c166c5eb56401696b21e63ebdacdad31298706b20bb25e7d Jan 05 21:36:32 crc kubenswrapper[5000]: W0105 21:36:32.176419 5000 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9490f60_a23b_4f00_baaf_c981be5e60cb.slice/crio-eca1d4e1475f80e4d8fab5b56e3501299fb8e67ab8e7a68c3adbd42406dfcd02 WatchSource:0}: Error finding container eca1d4e1475f80e4d8fab5b56e3501299fb8e67ab8e7a68c3adbd42406dfcd02: Status 404 returned error can't find the container with id eca1d4e1475f80e4d8fab5b56e3501299fb8e67ab8e7a68c3adbd42406dfcd02 Jan 05 21:36:32 crc kubenswrapper[5000]: W0105 21:36:32.188217 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc183ccbe_bb04_4614_9f26_11266d34255b.slice/crio-b60ce50dbcc81d264bab9268e8ba62f6f16691d14a72bd6a058bacc374b7c044 WatchSource:0}: Error finding container b60ce50dbcc81d264bab9268e8ba62f6f16691d14a72bd6a058bacc374b7c044: Status 404 returned error can't find the container with id b60ce50dbcc81d264bab9268e8ba62f6f16691d14a72bd6a058bacc374b7c044 Jan 05 21:36:32 crc kubenswrapper[5000]: W0105 21:36:32.188978 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e99fac6_cc0b_4c09_9268_d77c4ab4b936.slice/crio-78481546c67e773ce7fbfcdb1a31a3aa0883a0090a8c313a37827194025b83c2 WatchSource:0}: Error finding container 78481546c67e773ce7fbfcdb1a31a3aa0883a0090a8c313a37827194025b83c2: Status 404 returned error can't find the container with id 78481546c67e773ce7fbfcdb1a31a3aa0883a0090a8c313a37827194025b83c2 Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.202480 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.242419 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpmdv"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.253975 5000 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.255140 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:32 crc kubenswrapper[5000]: E0105 21:36:32.255473 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:32.755459763 +0000 UTC m=+147.711662232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.265043 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.298831 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.305832 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.342361 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.356825 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:32 crc kubenswrapper[5000]: E0105 21:36:32.357297 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:32.857282636 +0000 UTC m=+147.813485105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.380408 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wsljz" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.389842 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hkznp"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.420665 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.449533 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.458958 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:32 crc kubenswrapper[5000]: E0105 21:36:32.459309 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:32.959292014 +0000 UTC m=+147.915494483 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.468404 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8mlm7"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.492393 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mggdq"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.497255 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-pp4rh"] Jan 05 21:36:32 crc kubenswrapper[5000]: W0105 21:36:32.498379 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2edc99da_c399_450d_b55e_ac0c5ebe16af.slice/crio-543968f3be12939df087ffc9dff61d55a0036ae80f2e610de87f518d77af0713 WatchSource:0}: Error finding container 543968f3be12939df087ffc9dff61d55a0036ae80f2e610de87f518d77af0713: Status 404 returned error can't find the container with id 543968f3be12939df087ffc9dff61d55a0036ae80f2e610de87f518d77af0713 Jan 05 21:36:32 crc kubenswrapper[5000]: W0105 21:36:32.507360 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb67f7862_6f4b_4a3e_b3ce_a1e91b8db2ed.slice/crio-2e75de0342d7feecebb1e990ec4bd6522b926842b6a0995aa55b879f2bf2e0f8 WatchSource:0}: Error finding container 2e75de0342d7feecebb1e990ec4bd6522b926842b6a0995aa55b879f2bf2e0f8: Status 404 returned error can't find the 
container with id 2e75de0342d7feecebb1e990ec4bd6522b926842b6a0995aa55b879f2bf2e0f8 Jan 05 21:36:32 crc kubenswrapper[5000]: W0105 21:36:32.517838 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca23e911_0c80_44ac_a1a4_ce0b242675f7.slice/crio-63cb5dd385d838924709dea1dbb7e096f9ceb8680c2ee2f5a85bde53da3b6fc8 WatchSource:0}: Error finding container 63cb5dd385d838924709dea1dbb7e096f9ceb8680c2ee2f5a85bde53da3b6fc8: Status 404 returned error can't find the container with id 63cb5dd385d838924709dea1dbb7e096f9ceb8680c2ee2f5a85bde53da3b6fc8 Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.560249 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:32 crc kubenswrapper[5000]: E0105 21:36:32.560588 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:33.060576422 +0000 UTC m=+148.016778891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:32 crc kubenswrapper[5000]: W0105 21:36:32.631569 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode757274f_5ba4_4aff_89ab_cb6887e52ad7.slice/crio-444687c2bf32715b7bd0802d7a91f524d347713d5b5733d90ed18295c68b8a97 WatchSource:0}: Error finding container 444687c2bf32715b7bd0802d7a91f524d347713d5b5733d90ed18295c68b8a97: Status 404 returned error can't find the container with id 444687c2bf32715b7bd0802d7a91f524d347713d5b5733d90ed18295c68b8a97 Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.634739 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.635095 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.639060 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:32 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:32 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:32 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.639107 5000 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.662506 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:32 crc kubenswrapper[5000]: E0105 21:36:32.662768 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:33.162743774 +0000 UTC m=+148.118946243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.764309 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:32 crc kubenswrapper[5000]: E0105 21:36:32.764636 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:33.264623899 +0000 UTC m=+148.220826368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.764881 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l7jnf"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.867492 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:32 crc kubenswrapper[5000]: E0105 21:36:32.868186 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:33.36816659 +0000 UTC m=+148.324369059 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.878018 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.898536 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l"] Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.969228 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:32 crc kubenswrapper[5000]: E0105 21:36:32.970693 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:33.470680083 +0000 UTC m=+148.426882552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:32 crc kubenswrapper[5000]: I0105 21:36:32.994181 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz"] Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.003131 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9"] Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.031727 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nt45v"] Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.066109 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv"] Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.076221 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:33 crc kubenswrapper[5000]: E0105 21:36:33.076506 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:33.576490289 +0000 UTC m=+148.532692758 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.087637 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" event={"ID":"d7313182-9b06-475a-a504-e5207fc2f330","Type":"ContainerStarted","Data":"a0a0d8269bd63da4ad4177b0e753e75c52f444e4c3dd671709e8b812a5ef10b4"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.088591 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.091240 5000 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5jg6l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.091276 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" podUID="d7313182-9b06-475a-a504-e5207fc2f330" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.091935 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-z86kg"] Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.095423 5000 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf" event={"ID":"c183ccbe-bb04-4614-9f26-11266d34255b","Type":"ContainerStarted","Data":"b60ce50dbcc81d264bab9268e8ba62f6f16691d14a72bd6a058bacc374b7c044"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.097869 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"82b6acfe11bc3f64f92ccf6036d6eebbf98bd242136b4b3e79bdb38fe95c9595"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.099185 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hkznp" event={"ID":"00165d41-af6c-406d-a288-ab9be66824b8","Type":"ContainerStarted","Data":"7dea285d5fb32d00c7012c66384af97c246174eff4f97b7533af22e1766df06b"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.101066 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" event={"ID":"b9490f60-a23b-4f00-baaf-c981be5e60cb","Type":"ContainerStarted","Data":"b3aeed9d8c0442c21b97b1cf939117621451f45af5b6e568878aaa389d76af2c"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.101095 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" event={"ID":"b9490f60-a23b-4f00-baaf-c981be5e60cb","Type":"ContainerStarted","Data":"eca1d4e1475f80e4d8fab5b56e3501299fb8e67ab8e7a68c3adbd42406dfcd02"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.107093 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" event={"ID":"c661b9d0-ba17-41d2-94dd-f1c71fe529d0","Type":"ContainerStarted","Data":"6e8b9a2523ea6996d59527c4d44c62e033ef03c43c74d80503b737fda6e10e34"} Jan 05 
21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.107877 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.133533 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.168700 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jfrdb"] Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.170222 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246" event={"ID":"1ff91b55-22e1-46ce-b31e-5235a1d5c6f3","Type":"ContainerStarted","Data":"bd6c274ce04264e50943bfe85c13d51803683c01519178a0156f122975e59d8f"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.181967 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:33 crc kubenswrapper[5000]: E0105 21:36:33.184566 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:33.68455257 +0000 UTC m=+148.640755039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.186615 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d1507e8f168d908d8570cf740d57acb65f86cc2e809064c8d8d8190b6b734c6c"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.191516 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9vkcb"] Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.218049 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7djbs" event={"ID":"125d3243-1198-4f7d-8930-d1890b5def2a","Type":"ContainerStarted","Data":"1d5912fdbe56300b258b819262851d94a59038ab43d5a9eec843b98283954817"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.243612 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f094d57082969500c7bb059d4e8c4854d813d38667c66e6b74fcd6345d9ce2cd"} Jan 05 21:36:33 crc kubenswrapper[5000]: W0105 21:36:33.251118 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3f3f16b_c9fd_4fcf_ba7d_50fd1bd91794.slice/crio-48f4b66f8533ca94f954903ffeceb9e8c08629f79015ddc04e33b63ba6965064 WatchSource:0}: Error finding container 
48f4b66f8533ca94f954903ffeceb9e8c08629f79015ddc04e33b63ba6965064: Status 404 returned error can't find the container with id 48f4b66f8533ca94f954903ffeceb9e8c08629f79015ddc04e33b63ba6965064 Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.251776 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" event={"ID":"e435b45f-1446-4a36-afe1-86e1a057cbab","Type":"ContainerStarted","Data":"dd14eb601dc2c9554af4c4e31e7acd3c3ec00aa09d8581df3b342e482969d74c"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.257111 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" event={"ID":"7f1846c9-70fd-44b0-8ea0-f0d67a308185","Type":"ContainerStarted","Data":"6b00f8ad9ca912c2b54bd76506f8675b456e4d0ffbffa730c59395705cd5fe89"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.261145 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" event={"ID":"a6e45757-5dfb-4c3b-ba8f-f448b66eaa44","Type":"ContainerStarted","Data":"72a4401c05ff1e8a5bf1dac7290a2d49f0359067958236f6e1e2f7aa600455ad"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.263812 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" event={"ID":"67e26059-23ca-4086-bc5a-f935a4c403ca","Type":"ContainerStarted","Data":"4d02dfe0e213178a00044f005c1fe6958a0559b688886ce166b67e73cfab1b17"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.266729 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl" event={"ID":"1e99fac6-cc0b-4c09-9268-d77c4ab4b936","Type":"ContainerStarted","Data":"80e9450634ab88171538db7624206fd055d36da1792b2bff5aba4319ab8c45a3"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.266764 
5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl" event={"ID":"1e99fac6-cc0b-4c09-9268-d77c4ab4b936","Type":"ContainerStarted","Data":"78481546c67e773ce7fbfcdb1a31a3aa0883a0090a8c313a37827194025b83c2"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.268475 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp"] Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.287753 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:33 crc kubenswrapper[5000]: E0105 21:36:33.289685 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:33.789654306 +0000 UTC m=+148.745856775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.310013 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l" event={"ID":"032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c","Type":"ContainerStarted","Data":"a5dd49a59ccffadff7f72a5efe933ba128d5cbdde096c2125197c125bbb57faf"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.316909 5000 generic.go:334] "Generic (PLEG): container finished" podID="56c32f18-c8bd-409c-9501-164a49a93dcf" containerID="2468d18c0edd04dec037b62f96bd1cf158370c097cefcb2f5a06ff195cc213da" exitCode=0 Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.316979 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" event={"ID":"56c32f18-c8bd-409c-9501-164a49a93dcf","Type":"ContainerDied","Data":"2468d18c0edd04dec037b62f96bd1cf158370c097cefcb2f5a06ff195cc213da"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.317007 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" event={"ID":"56c32f18-c8bd-409c-9501-164a49a93dcf","Type":"ContainerStarted","Data":"67ed1a56ffc65e5604910ffc7d6c54d24187777bcb5c4fd3af4197bc23625744"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.348787 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" 
event={"ID":"7b6fd8ae-ef38-4894-b2dd-4336e25727c5","Type":"ContainerStarted","Data":"c7617f62abc8152cb084fab675c64fab0f227021f8c69010ea8fd9486e349fab"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.348829 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.356716 5000 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5x54p container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.356979 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" podUID="7b6fd8ae-ef38-4894-b2dd-4336e25727c5" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.357474 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7mvq2" event={"ID":"71825513-a9cf-4528-962f-b0c05006bdcd","Type":"ContainerStarted","Data":"f34e6b41f7c8b70fa4817b29972f46e5ff371cdb5d35b0a491ceef4bd91a8981"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.357509 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7mvq2" event={"ID":"71825513-a9cf-4528-962f-b0c05006bdcd","Type":"ContainerStarted","Data":"868c2cfed8f1ad98c18fc1def29e08dbeec7fb8c51b99b7cc4317c6c9b2380f2"} Jan 05 21:36:33 crc kubenswrapper[5000]: W0105 21:36:33.368209 5000 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e3c75b0_db4e_4fc0_93d2_f46f7fa62683.slice/crio-0815faf6745935c78aa44b37991c3882549c9b2cd433a1c646b449e63d93b828 WatchSource:0}: Error finding container 0815faf6745935c78aa44b37991c3882549c9b2cd433a1c646b449e63d93b828: Status 404 returned error can't find the container with id 0815faf6745935c78aa44b37991c3882549c9b2cd433a1c646b449e63d93b828 Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.374063 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-pp4rh" event={"ID":"e757274f-5ba4-4aff-89ab-cb6887e52ad7","Type":"ContainerStarted","Data":"444687c2bf32715b7bd0802d7a91f524d347713d5b5733d90ed18295c68b8a97"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.382431 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" event={"ID":"ca23e911-0c80-44ac-a1a4-ce0b242675f7","Type":"ContainerStarted","Data":"63cb5dd385d838924709dea1dbb7e096f9ceb8680c2ee2f5a85bde53da3b6fc8"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.388942 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:33 crc kubenswrapper[5000]: E0105 21:36:33.390533 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:33.890519692 +0000 UTC m=+148.846722161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.432561 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" event={"ID":"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed","Type":"ContainerStarted","Data":"2e75de0342d7feecebb1e990ec4bd6522b926842b6a0995aa55b879f2bf2e0f8"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.452140 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx" event={"ID":"89c433d9-cdda-4a3b-b82c-78e23f9d790b","Type":"ContainerStarted","Data":"d3e5d616523a6403185abf6c2a69047f1ae1ca6ce43a405a54faa11ee189363d"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.475252 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" podStartSLOduration=124.475237067 podStartE2EDuration="2m4.475237067s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:33.474192477 +0000 UTC m=+148.430394936" watchObservedRunningTime="2026-01-05 21:36:33.475237067 +0000 UTC m=+148.431439536" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.487151 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" 
event={"ID":"bbd8f69e-6058-44de-b1f5-b6a0b413c3aa","Type":"ContainerStarted","Data":"d2b9565822b74fe6b94f5b258e73a14cf31115d13b39b52c482fb8d74e1e107a"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.490233 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:33 crc kubenswrapper[5000]: E0105 21:36:33.491321 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:33.991298705 +0000 UTC m=+148.947501174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:33 crc kubenswrapper[5000]: W0105 21:36:33.494169 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12d673e5_293f_4f9e_abe0_a92528bd45c3.slice/crio-6f8b17805f2ab22440ed357295c5b40e47be8a80346cfa434a86348c3ebc32af WatchSource:0}: Error finding container 6f8b17805f2ab22440ed357295c5b40e47be8a80346cfa434a86348c3ebc32af: Status 404 returned error can't find the container with id 6f8b17805f2ab22440ed357295c5b40e47be8a80346cfa434a86348c3ebc32af Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.519533 
5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" event={"ID":"7422b464-53bc-4f4a-8734-bb9f8d5ca846","Type":"ContainerStarted","Data":"b519733a909bdf31c166c5eb56401696b21e63ebdacdad31298706b20bb25e7d"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.525014 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" podStartSLOduration=124.524995885 podStartE2EDuration="2m4.524995885s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:33.519794637 +0000 UTC m=+148.475997106" watchObservedRunningTime="2026-01-05 21:36:33.524995885 +0000 UTC m=+148.481198354" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.534163 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" event={"ID":"837c014f-8de6-4bc9-883d-fe5833bf101a","Type":"ContainerStarted","Data":"779c046ab5f0173788db9477d1cb7d360763ea9cb2fa1b1e04d6a269c0dd565a"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.546623 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" event={"ID":"bbe6c3a1-1534-4095-9e25-1f4ce093938e","Type":"ContainerStarted","Data":"52b9e31bce1cc59a27ddd112783506b1edcd5f9243ef2dc48cc4c92f6550bba3"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.548014 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.551216 5000 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mr825 container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.551284 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" podUID="bbe6c3a1-1534-4095-9e25-1f4ce093938e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.595540 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:33 crc kubenswrapper[5000]: E0105 21:36:33.596008 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:34.095991129 +0000 UTC m=+149.052193598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.596560 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" event={"ID":"2edc99da-c399-450d-b55e-ac0c5ebe16af","Type":"ContainerStarted","Data":"543968f3be12939df087ffc9dff61d55a0036ae80f2e610de87f518d77af0713"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.596714 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b"] Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.597735 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" event={"ID":"5818841d-889c-49f1-96fc-efa5064f48b7","Type":"ContainerStarted","Data":"524f0ecd029bdd7b2631c37871ebb98e6272583c76fe6280b0e4f1089bbe9659"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.603606 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" event={"ID":"096d4722-b423-4819-a8fb-61556963fd3a","Type":"ContainerStarted","Data":"323496538def1b601785cb342df16807388c843fc6905d12d49b63f44954040a"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.617266 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cmzkl" podStartSLOduration=124.617250245 podStartE2EDuration="2m4.617250245s" 
podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:33.616471973 +0000 UTC m=+148.572674442" watchObservedRunningTime="2026-01-05 21:36:33.617250245 +0000 UTC m=+148.573452714" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.618701 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tf7rj" event={"ID":"2245d315-61bc-4b08-8e67-ffb6f2b84674","Type":"ContainerStarted","Data":"faa520a8a54024683efe242644d3b428dbec720adf4bf36410d4ebc0d7915574"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.619985 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tf7rj" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.636577 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:33 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:33 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:33 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.636929 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.639903 5000 patch_prober.go:28] interesting pod/downloads-7954f5f757-tf7rj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 05 21:36:33 
crc kubenswrapper[5000]: I0105 21:36:33.639940 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tf7rj" podUID="2245d315-61bc-4b08-8e67-ffb6f2b84674" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.651423 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rn246" podStartSLOduration=124.651401479 podStartE2EDuration="2m4.651401479s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:33.640490188 +0000 UTC m=+148.596692657" watchObservedRunningTime="2026-01-05 21:36:33.651401479 +0000 UTC m=+148.607603958" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.679239 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l7jnf" event={"ID":"911aeee9-3b02-4f97-8b71-51e57c8cf02e","Type":"ContainerStarted","Data":"96853751eaab260f1f7869468e73914840f7c522cc550e68d75ec636d600bf2e"} Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.705047 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:33 crc kubenswrapper[5000]: E0105 21:36:33.705568 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-05 21:36:34.205555223 +0000 UTC m=+149.161757692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.707757 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:33 crc kubenswrapper[5000]: E0105 21:36:33.724930 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:34.224903974 +0000 UTC m=+149.181106443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.726857 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-7mvq2" podStartSLOduration=124.726838579 podStartE2EDuration="2m4.726838579s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:33.699607023 +0000 UTC m=+148.655809492" watchObservedRunningTime="2026-01-05 21:36:33.726838579 +0000 UTC m=+148.683041048" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.728183 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" podStartSLOduration=124.728176127 podStartE2EDuration="2m4.728176127s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:33.726189471 +0000 UTC m=+148.682391940" watchObservedRunningTime="2026-01-05 21:36:33.728176127 +0000 UTC m=+148.684378596" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.826671 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:33 crc kubenswrapper[5000]: E0105 21:36:33.827723 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:34.327708745 +0000 UTC m=+149.283911214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.844018 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-fv7st" podStartSLOduration=124.843999879 podStartE2EDuration="2m4.843999879s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:33.776600748 +0000 UTC m=+148.732803217" watchObservedRunningTime="2026-01-05 21:36:33.843999879 +0000 UTC m=+148.800202348" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.844372 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9pgf" podStartSLOduration=124.84436646 podStartE2EDuration="2m4.84436646s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:33.839419129 +0000 UTC 
m=+148.795621608" watchObservedRunningTime="2026-01-05 21:36:33.84436646 +0000 UTC m=+148.800568929" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.890769 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-fskst" podStartSLOduration=124.890754612 podStartE2EDuration="2m4.890754612s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:33.88998829 +0000 UTC m=+148.846190759" watchObservedRunningTime="2026-01-05 21:36:33.890754612 +0000 UTC m=+148.846957081" Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.931618 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:33 crc kubenswrapper[5000]: W0105 21:36:33.935011 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02fda1fd_a80f_4025_8b13_bfdf75f8ea0a.slice/crio-62db4bd7d19fad5f3a7ad7afde1bf5e7fe170cd17306b627014d955d2c458a66 WatchSource:0}: Error finding container 62db4bd7d19fad5f3a7ad7afde1bf5e7fe170cd17306b627014d955d2c458a66: Status 404 returned error can't find the container with id 62db4bd7d19fad5f3a7ad7afde1bf5e7fe170cd17306b627014d955d2c458a66 Jan 05 21:36:33 crc kubenswrapper[5000]: E0105 21:36:33.944597 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-05 21:36:34.444572827 +0000 UTC m=+149.400775296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:33 crc kubenswrapper[5000]: I0105 21:36:33.963922 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tf7rj" podStartSLOduration=124.963904578 podStartE2EDuration="2m4.963904578s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:33.924262248 +0000 UTC m=+148.880464707" watchObservedRunningTime="2026-01-05 21:36:33.963904578 +0000 UTC m=+148.920107047" Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.033321 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:34 crc kubenswrapper[5000]: E0105 21:36:34.033614 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:34.533525132 +0000 UTC m=+149.489727611 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.033926 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:34 crc kubenswrapper[5000]: E0105 21:36:34.036544 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:34.536528788 +0000 UTC m=+149.492731327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.138385 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:34 crc kubenswrapper[5000]: E0105 21:36:34.139163 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:34.639129953 +0000 UTC m=+149.595332432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.139366 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:34 crc kubenswrapper[5000]: E0105 21:36:34.139696 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:34.639685999 +0000 UTC m=+149.595888468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.241104 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:34 crc kubenswrapper[5000]: E0105 21:36:34.241524 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:34.741508721 +0000 UTC m=+149.697711190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.342633 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:34 crc kubenswrapper[5000]: E0105 21:36:34.343166 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:34.843146959 +0000 UTC m=+149.799349478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.444108 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:34 crc kubenswrapper[5000]: E0105 21:36:34.444399 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:34.944384505 +0000 UTC m=+149.900586974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.545840 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:34 crc kubenswrapper[5000]: E0105 21:36:34.546237 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:35.046225658 +0000 UTC m=+150.002428127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.642125 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:34 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:34 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:34 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.642161 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.647264 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:34 crc kubenswrapper[5000]: E0105 21:36:34.647828 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-05 21:36:35.147813434 +0000 UTC m=+150.104015903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.749717 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:34 crc kubenswrapper[5000]: E0105 21:36:34.750047 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:35.250035219 +0000 UTC m=+150.206237688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.756319 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" event={"ID":"77750436-ae8c-4ab3-9647-dfd13c2822c6","Type":"ContainerStarted","Data":"fb6465f66c0cd2329f1b84db157a05851c9f56e3d7d1b965b0ee93bc05230c7c"} Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.756367 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" event={"ID":"77750436-ae8c-4ab3-9647-dfd13c2822c6","Type":"ContainerStarted","Data":"52d18f64552e3ebdea8695cf39e2bd6d2738af513cfdebf7813867a0cc8c01a0"} Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.760071 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nt45v" event={"ID":"3e3c75b0-db4e-4fc0-93d2-f46f7fa62683","Type":"ContainerStarted","Data":"0815faf6745935c78aa44b37991c3882549c9b2cd433a1c646b449e63d93b828"} Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.765568 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"80277d5102cc103884a3debb6212b31c4c8ae1dd7af05e3111d580d7e0550e70"} Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.788596 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" podStartSLOduration=125.788579037 podStartE2EDuration="2m5.788579037s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:33.964212486 +0000 UTC m=+148.920414955" watchObservedRunningTime="2026-01-05 21:36:34.788579037 +0000 UTC m=+149.744781506" Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.789328 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" podStartSLOduration=125.789288508 podStartE2EDuration="2m5.789288508s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:34.787343582 +0000 UTC m=+149.743546051" watchObservedRunningTime="2026-01-05 21:36:34.789288508 +0000 UTC m=+149.745490977" Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.839565 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" event={"ID":"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed","Type":"ContainerStarted","Data":"838d97f06b99f03e2a84434152e547498bfb991b0e8fda027c34e30a12a46ebc"} Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.844936 5000 generic.go:334] "Generic (PLEG): container finished" podID="125d3243-1198-4f7d-8930-d1890b5def2a" containerID="cae04415d561b1af29475e8b3088282323f9dfc48e42d721aafb3849aac70ef1" exitCode=0 Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.845012 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7djbs" event={"ID":"125d3243-1198-4f7d-8930-d1890b5def2a","Type":"ContainerDied","Data":"cae04415d561b1af29475e8b3088282323f9dfc48e42d721aafb3849aac70ef1"} Jan 05 
21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.848177 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" event={"ID":"cf36cce9-c0d2-4250-b980-2bec1a306493","Type":"ContainerStarted","Data":"595e2c74b3bbd9f66d2866adb72889b388c69e772ff45c8bb4893ba22387bab6"} Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.850397 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:34 crc kubenswrapper[5000]: E0105 21:36:34.850466 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:35.350450561 +0000 UTC m=+150.306653030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.856637 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:34 crc kubenswrapper[5000]: E0105 21:36:34.858073 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:35.358052428 +0000 UTC m=+150.314254927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.907178 5000 generic.go:334] "Generic (PLEG): container finished" podID="7422b464-53bc-4f4a-8734-bb9f8d5ca846" containerID="78432a4585b992f37e03e7f48c6a8ccc49310d6c3061d7aedf48a7ef5cef6ff3" exitCode=0 Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.907267 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" event={"ID":"7422b464-53bc-4f4a-8734-bb9f8d5ca846","Type":"ContainerDied","Data":"78432a4585b992f37e03e7f48c6a8ccc49310d6c3061d7aedf48a7ef5cef6ff3"} Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.964056 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" event={"ID":"ca23e911-0c80-44ac-a1a4-ce0b242675f7","Type":"ContainerStarted","Data":"2bf8d0549be08360bb2fc2d80d5763cfcef2f3890bf41e4bb65a0f1f00be1dd4"} Jan 05 21:36:34 crc kubenswrapper[5000]: I0105 21:36:34.967304 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:34 crc kubenswrapper[5000]: E0105 21:36:34.968468 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:35.468449275 +0000 UTC m=+150.424651744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.040653 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx" event={"ID":"89c433d9-cdda-4a3b-b82c-78e23f9d790b","Type":"ContainerStarted","Data":"1e25e43ff99d81c499a4421ad5407d73915f663b605728ca6ddb18b3cb56bbb5"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.045792 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-pp4rh" event={"ID":"e757274f-5ba4-4aff-89ab-cb6887e52ad7","Type":"ContainerStarted","Data":"142c0f74e3a3488b4074a77ed6e2f4bc22e8d4a7a8a5ec51892523b1e56aec3b"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.045872 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.049780 5000 patch_prober.go:28] interesting pod/console-operator-58897d9998-pp4rh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/readyz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.049833 5000 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-58897d9998-pp4rh" podUID="e757274f-5ba4-4aff-89ab-cb6887e52ad7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/readyz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.069038 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wsljz" event={"ID":"f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794","Type":"ContainerStarted","Data":"48f4b66f8533ca94f954903ffeceb9e8c08629f79015ddc04e33b63ba6965064"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.070299 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:35 crc kubenswrapper[5000]: E0105 21:36:35.070587 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:35.570576546 +0000 UTC m=+150.526779015 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.075227 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bbe16b0a82265dea858087588b1eb835c3acd9a8a02b92d9876cd7c6519f373f"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.076324 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.080099 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-8mlm7" podStartSLOduration=126.080083107 podStartE2EDuration="2m6.080083107s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:35.024284107 +0000 UTC m=+149.980486576" watchObservedRunningTime="2026-01-05 21:36:35.080083107 +0000 UTC m=+150.036285566" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.082142 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-pp4rh" podStartSLOduration=126.082128406 podStartE2EDuration="2m6.082128406s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-05 21:36:35.078957885 +0000 UTC m=+150.035160344" watchObservedRunningTime="2026-01-05 21:36:35.082128406 +0000 UTC m=+150.038330875" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.087216 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" event={"ID":"2edc99da-c399-450d-b55e-ac0c5ebe16af","Type":"ContainerStarted","Data":"fe80c3ec8f7fde3972067ddb86651cb781430410d3024cce466c02b63b28364f"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.088733 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" event={"ID":"5818841d-889c-49f1-96fc-efa5064f48b7","Type":"ContainerStarted","Data":"55082412317ccf18ef1693dc73905ff562706a8155ed09e94032d1928ee864f8"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.143337 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a88df62218e27b38aa0392ecc16bc27d2d9a676ac8f86e9ed77935cf3ac95d39"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.167327 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l" event={"ID":"032d4ba5-1cda-4ab2-98ae-3fdb3ba89a5c","Type":"ContainerStarted","Data":"09decee7b3e028d4f174017623722f27224a49e6bded9c11ae3b8da5e03d163d"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.171038 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:35 crc kubenswrapper[5000]: E0105 21:36:35.172131 
5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:35.67211472 +0000 UTC m=+150.628317189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.198383 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" event={"ID":"bbd8f69e-6058-44de-b1f5-b6a0b413c3aa","Type":"ContainerStarted","Data":"c49beb7bd97fcff7455169f1214207a4f9a46bb55f0cb09354828e49b687b382"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.205131 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nqnqf" podStartSLOduration=126.205110841 podStartE2EDuration="2m6.205110841s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:35.154935531 +0000 UTC m=+150.111138000" watchObservedRunningTime="2026-01-05 21:36:35.205110841 +0000 UTC m=+150.161313310" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.230881 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" 
event={"ID":"096d4722-b423-4819-a8fb-61556963fd3a","Type":"ContainerStarted","Data":"c0d8812b28ea53954e7ee9a7328ffbdfc3228243ad186e55babfb13a336e8403"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.259971 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" podStartSLOduration=126.259948834 podStartE2EDuration="2m6.259948834s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:35.259187242 +0000 UTC m=+150.215389721" watchObservedRunningTime="2026-01-05 21:36:35.259948834 +0000 UTC m=+150.216151323" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.272148 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:35 crc kubenswrapper[5000]: E0105 21:36:35.273949 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:35.773934803 +0000 UTC m=+150.730137272 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.276064 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf" event={"ID":"c183ccbe-bb04-4614-9f26-11266d34255b","Type":"ContainerStarted","Data":"d06de74290fa28737eeeb504aef39640c7d5b8527f2887f86e72b58f75ae5414"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.280428 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jfrdb" event={"ID":"6feccfd7-da12-44e9-beb1-701899d9f4c1","Type":"ContainerStarted","Data":"a0f24d94c2a5e62b3c4ba6b41ca00bfd2acc87a703f1545c3637e97039900122"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.287424 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" event={"ID":"12d673e5-293f-4f9e-abe0-a92528bd45c3","Type":"ContainerStarted","Data":"6f8b17805f2ab22440ed357295c5b40e47be8a80346cfa434a86348c3ebc32af"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.287562 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-w7s2l" podStartSLOduration=126.287545361 podStartE2EDuration="2m6.287545361s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:35.285554304 +0000 UTC m=+150.241756773" 
watchObservedRunningTime="2026-01-05 21:36:35.287545361 +0000 UTC m=+150.243747830" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.353941 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dkpxf" podStartSLOduration=126.353920893 podStartE2EDuration="2m6.353920893s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:35.330212237 +0000 UTC m=+150.286414736" watchObservedRunningTime="2026-01-05 21:36:35.353920893 +0000 UTC m=+150.310123362" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.357270 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-z86kg" event={"ID":"d750d899-ef96-457a-abb4-761a420bc277","Type":"ContainerStarted","Data":"8f2457a4fef0d645496d261d496b9e908cedd2682fde42f8a2c49dca2df685f9"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.373788 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:35 crc kubenswrapper[5000]: E0105 21:36:35.374275 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:35.874251883 +0000 UTC m=+150.830454352 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.374787 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" event={"ID":"837c014f-8de6-4bc9-883d-fe5833bf101a","Type":"ContainerStarted","Data":"95208cba95fa61f2e77776a27c74654bc0ca6643943bcaeb58a82f9811169785"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.402279 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" event={"ID":"bbe6c3a1-1534-4095-9e25-1f4ce093938e","Type":"ContainerStarted","Data":"bd405d30ee6bff63ae848c7ad9bdfd880f9a3294acc48fdc11ec2dfc8c8753ac"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.432054 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l7jnf" event={"ID":"911aeee9-3b02-4f97-8b71-51e57c8cf02e","Type":"ContainerStarted","Data":"2c5f33ede350877a1320543543788412fd935f5b5135e89a5e4177b67448e016"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.432389 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.466084 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-jfrdb" podStartSLOduration=7.46606772 podStartE2EDuration="7.46606772s" podCreationTimestamp="2026-01-05 
21:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:35.369470366 +0000 UTC m=+150.325672835" watchObservedRunningTime="2026-01-05 21:36:35.46606772 +0000 UTC m=+150.422270189" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.467136 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv" event={"ID":"81ff9dcc-be92-40cf-b45b-ba49fc78918a","Type":"ContainerStarted","Data":"a487af252bf80423ced2e96db3db6427f7d8fa69c78626d507285275e88b769a"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.477082 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:35 crc kubenswrapper[5000]: E0105 21:36:35.480721 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:35.980707077 +0000 UTC m=+150.936909546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.494385 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" event={"ID":"7b6fd8ae-ef38-4894-b2dd-4336e25727c5","Type":"ContainerStarted","Data":"3bf968bc73ab483afbda1eb64fac4b2c180519925e75e7a02aa353f1bf2ab6d1"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.503816 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" event={"ID":"e9e0853d-6397-42cc-9eaf-eda04f467b82","Type":"ContainerStarted","Data":"0ce963a117c1dc8b7e8966ecd6b48ebdcb027b31feacb9b24db8d806cf11e549"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.504564 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.508815 5000 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jbnc9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.509584 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" podUID="e9e0853d-6397-42cc-9eaf-eda04f467b82" containerName="catalog-operator" 
probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.564646 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" podStartSLOduration=126.56463038 podStartE2EDuration="2m6.56463038s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:35.466460591 +0000 UTC m=+150.422663060" watchObservedRunningTime="2026-01-05 21:36:35.56463038 +0000 UTC m=+150.520832849" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.566198 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mggdq" podStartSLOduration=126.566190614 podStartE2EDuration="2m6.566190614s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:35.564162827 +0000 UTC m=+150.520365296" watchObservedRunningTime="2026-01-05 21:36:35.566190614 +0000 UTC m=+150.522393083" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.568534 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" event={"ID":"02fda1fd-a80f-4025-8b13-bfdf75f8ea0a","Type":"ContainerStarted","Data":"62db4bd7d19fad5f3a7ad7afde1bf5e7fe170cd17306b627014d955d2c458a66"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.572725 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" 
event={"ID":"5aa88737-fa3d-4ebd-a9a0-90e709e47a01","Type":"ContainerStarted","Data":"56b7fbe072e055d3eb11d15c18b3ee97ecafd4c3be3063f3e61d0afdfa7ddef3"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.577601 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:35 crc kubenswrapper[5000]: E0105 21:36:35.577883 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:36.077861327 +0000 UTC m=+151.034063796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.578072 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:35 crc kubenswrapper[5000]: E0105 21:36:35.578413 5000 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:36.078405963 +0000 UTC m=+151.034608422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.580453 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" event={"ID":"7f1846c9-70fd-44b0-8ea0-f0d67a308185","Type":"ContainerStarted","Data":"2ccbdb54d59ba15bd05e8e24636a470255c914a0a902e132f1cb888f6ed9ffb6"} Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.581763 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.582773 5000 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fpmdv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.582943 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" podUID="7f1846c9-70fd-44b0-8ea0-f0d67a308185" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 05 
21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.583077 5000 patch_prober.go:28] interesting pod/downloads-7954f5f757-tf7rj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.583154 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tf7rj" podUID="2245d315-61bc-4b08-8e67-ffb6f2b84674" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.583164 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5x54p" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.592049 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.679667 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:35 crc kubenswrapper[5000]: E0105 21:36:35.681148 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:36.18110513 +0000 UTC m=+151.137307599 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.684207 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:35 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:35 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:35 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.684259 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.783498 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:35 crc kubenswrapper[5000]: E0105 21:36:35.787683 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-05 21:36:36.287670078 +0000 UTC m=+151.243872547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.842667 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" podStartSLOduration=126.842654586 podStartE2EDuration="2m6.842654586s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:35.786281649 +0000 UTC m=+150.742484118" watchObservedRunningTime="2026-01-05 21:36:35.842654586 +0000 UTC m=+150.798857055" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.873448 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv" podStartSLOduration=126.873426853 podStartE2EDuration="2m6.873426853s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:35.87158048 +0000 UTC m=+150.827782959" watchObservedRunningTime="2026-01-05 21:36:35.873426853 +0000 UTC m=+150.829629322" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.886336 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:35 crc kubenswrapper[5000]: E0105 21:36:35.886959 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:36.386943458 +0000 UTC m=+151.343145927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.915293 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" podStartSLOduration=126.915275786 podStartE2EDuration="2m6.915275786s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:35.914191305 +0000 UTC m=+150.870393794" watchObservedRunningTime="2026-01-05 21:36:35.915275786 +0000 UTC m=+150.871478255" Jan 05 21:36:35 crc kubenswrapper[5000]: I0105 21:36:35.987554 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: 
\"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:35 crc kubenswrapper[5000]: E0105 21:36:35.987987 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:36.487971739 +0000 UTC m=+151.444174208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.102788 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:36 crc kubenswrapper[5000]: E0105 21:36:36.103301 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:36.603285146 +0000 UTC m=+151.559487615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.207082 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:36 crc kubenswrapper[5000]: E0105 21:36:36.207368 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:36.707356042 +0000 UTC m=+151.663558511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.249028 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" podStartSLOduration=127.24901191 podStartE2EDuration="2m7.24901191s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:36.197470741 +0000 UTC m=+151.153673210" watchObservedRunningTime="2026-01-05 21:36:36.24901191 +0000 UTC m=+151.205214379" Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.307679 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:36 crc kubenswrapper[5000]: E0105 21:36:36.307777 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:36.807759685 +0000 UTC m=+151.763962164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.307847 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:36 crc kubenswrapper[5000]: E0105 21:36:36.308168 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:36.808156726 +0000 UTC m=+151.764359195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.409513 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:36 crc kubenswrapper[5000]: E0105 21:36:36.409682 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:36.90965671 +0000 UTC m=+151.865859179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.410432 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:36 crc kubenswrapper[5000]: E0105 21:36:36.410845 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:36.910830853 +0000 UTC m=+151.867033322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.511078 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:36 crc kubenswrapper[5000]: E0105 21:36:36.511317 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:37.011302948 +0000 UTC m=+151.967505417 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.612490 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:36 crc kubenswrapper[5000]: E0105 21:36:36.612854 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:37.112837962 +0000 UTC m=+152.069040441 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.623295 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jfrdb" event={"ID":"6feccfd7-da12-44e9-beb1-701899d9f4c1","Type":"ContainerStarted","Data":"bb42a356ec798c07b607e4a3a01baf03764146a9d6e3fac3f2c1bfa6548c6bea"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.625248 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-px2rz" event={"ID":"5aa88737-fa3d-4ebd-a9a0-90e709e47a01","Type":"ContainerStarted","Data":"4f5551860eb1e519c04189a405b404a2567ee41357db2f5d82310606ec2a9adb"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.631171 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ddm6w" event={"ID":"bbd8f69e-6058-44de-b1f5-b6a0b413c3aa","Type":"ContainerStarted","Data":"fea5f3ba4d6e212f6ff5d049853c2377866bff0dd6fc6cb8850cbd712b67e128"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.637000 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:36 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:36 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:36 crc kubenswrapper[5000]: healthz check failed 
Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.637058 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.646247 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" event={"ID":"a6e45757-5dfb-4c3b-ba8f-f448b66eaa44","Type":"ContainerStarted","Data":"79d4750904964251d0008f0003726c7f3f63d13d84fb2c0baf57abf51d629356"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.653291 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cfzn2" event={"ID":"096d4722-b423-4819-a8fb-61556963fd3a","Type":"ContainerStarted","Data":"caf2a694f3daf5533057929ad39433514641f86ef1bb19fe44e7d7d62fa112fb"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.665736 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l7jnf" event={"ID":"911aeee9-3b02-4f97-8b71-51e57c8cf02e","Type":"ContainerStarted","Data":"91ba5abb7420f344cc54e274a82fcc2b2730e4850f8435b9f931cd90f56972be"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.680697 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" event={"ID":"7422b464-53bc-4f4a-8734-bb9f8d5ca846","Type":"ContainerStarted","Data":"65de392871d673b3e944eb6455f69c5396dca9cfeaf1fd83885bd03b150fbaa6"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.681114 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.692242 5000 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-dns/dns-default-nt45v" event={"ID":"3e3c75b0-db4e-4fc0-93d2-f46f7fa62683","Type":"ContainerStarted","Data":"44c77def3b731a20fda6d33c5667785b185934029d665eca2296d5ad146a7b9a"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.692289 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nt45v" event={"ID":"3e3c75b0-db4e-4fc0-93d2-f46f7fa62683","Type":"ContainerStarted","Data":"65e21ee366b30c5721360a42b5787548ee39bb545b4ae2989f315c8807aafe61"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.692829 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nt45v" Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.697284 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-prwld" podStartSLOduration=127.697270479 podStartE2EDuration="2m7.697270479s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:36.693327757 +0000 UTC m=+151.649530226" watchObservedRunningTime="2026-01-05 21:36:36.697270479 +0000 UTC m=+151.653472948" Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.713468 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:36 crc kubenswrapper[5000]: E0105 21:36:36.713945 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-05 21:36:37.213928914 +0000 UTC m=+152.170131383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.732784 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" event={"ID":"e435b45f-1446-4a36-afe1-86e1a057cbab","Type":"ContainerStarted","Data":"eea284f5deb7d082082127e3b66323c25d9302f2213fa2456fce8c44386eed5d"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.732835 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" event={"ID":"e435b45f-1446-4a36-afe1-86e1a057cbab","Type":"ContainerStarted","Data":"b8c67fd91cbed9bf2e6accfaafdfb39bad018054c2cc731498eb5d465e1be68c"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.733840 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.737912 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2cjfv" event={"ID":"81ff9dcc-be92-40cf-b45b-ba49fc78918a","Type":"ContainerStarted","Data":"a7e7ac259647f173d60c1d91db88666c41deb3672950e05c074a2cdb34956caa"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.764124 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l7jnf" podStartSLOduration=127.764107535 podStartE2EDuration="2m7.764107535s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:36.746218264 +0000 UTC m=+151.702420733" watchObservedRunningTime="2026-01-05 21:36:36.764107535 +0000 UTC m=+151.720309994" Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.764143 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" event={"ID":"12d673e5-293f-4f9e-abe0-a92528bd45c3","Type":"ContainerStarted","Data":"300b8c5cdbd32257596704e3f7f485b8380250e8896589d63aab9179c3248034"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.766701 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7djbs" event={"ID":"125d3243-1198-4f7d-8930-d1890b5def2a","Type":"ContainerStarted","Data":"c249479be6f71b1ab011baec9e639264c28f8d5419b8d32eca384a59687dffe4"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.775723 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hkznp" event={"ID":"00165d41-af6c-406d-a288-ab9be66824b8","Type":"ContainerStarted","Data":"dc93160096fbd0e93694e80d5382ed64760f3b770b747d5bda723f8a9ffb4502"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.775770 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hkznp" event={"ID":"00165d41-af6c-406d-a288-ab9be66824b8","Type":"ContainerStarted","Data":"cb5557a7b22200d1dbc9622115ee4685b4b6183fee14c97a450df542ea1aa614"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.777548 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" 
event={"ID":"b67f7862-6f4b-4a3e-b3ce-a1e91b8db2ed","Type":"ContainerStarted","Data":"83e2fed010ee5b6a7aeba6c802c7926d44f268a28ed6d7f369acbd672bfd131b"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.781109 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx" event={"ID":"89c433d9-cdda-4a3b-b82c-78e23f9d790b","Type":"ContainerStarted","Data":"8480704979d0f009905585a8b9f71fec0e13caf226b3e0d45a492ab9b0a04dab"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.815629 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.816531 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" event={"ID":"02fda1fd-a80f-4025-8b13-bfdf75f8ea0a","Type":"ContainerStarted","Data":"4ac9742e764ba992924b2715014d3c8b07de89c025fbae83b0d7be010219fe81"} Jan 05 21:36:36 crc kubenswrapper[5000]: E0105 21:36:36.820326 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:37.320312507 +0000 UTC m=+152.276514976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.833260 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" event={"ID":"2edc99da-c399-450d-b55e-ac0c5ebe16af","Type":"ContainerStarted","Data":"9fe7830e57f5d0a283858792c91dc64ff6c65134a891ab2756f0a81bd5b22e9a"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.835860 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" event={"ID":"e9e0853d-6397-42cc-9eaf-eda04f467b82","Type":"ContainerStarted","Data":"2238ea4d56c64073a0b36e95f4d6cd9a5d07d7f90a48b4462e27244d7f4d4c31"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.836479 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nt45v" podStartSLOduration=8.836468947 podStartE2EDuration="8.836468947s" podCreationTimestamp="2026-01-05 21:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:36.821104999 +0000 UTC m=+151.777307458" watchObservedRunningTime="2026-01-05 21:36:36.836468947 +0000 UTC m=+151.792671416" Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.837368 5000 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jbnc9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.843566 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" podUID="e9e0853d-6397-42cc-9eaf-eda04f467b82" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.848379 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-z86kg" event={"ID":"d750d899-ef96-457a-abb4-761a420bc277","Type":"ContainerStarted","Data":"e1323396f852b3f746a4af743cab5278ee8068ae27bb83dad0d43c3a66a1477a"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.848419 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-z86kg" event={"ID":"d750d899-ef96-457a-abb4-761a420bc277","Type":"ContainerStarted","Data":"8c8e9618d77e4879a84b88d6e8b3983c2458f1f30a3314d40b172f3891dc7455"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.882073 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wsljz" event={"ID":"f3f3f16b-c9fd-4fcf-ba7d-50fd1bd91794","Type":"ContainerStarted","Data":"280f17a865738e48a91217d5ed4b64ba5cea6e6a0db6911c12cd32e1ad800e9f"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.918024 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:36 crc kubenswrapper[5000]: E0105 21:36:36.919202 5000 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:37.419188125 +0000 UTC m=+152.375390594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.922591 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" event={"ID":"56c32f18-c8bd-409c-9501-164a49a93dcf","Type":"ContainerStarted","Data":"e62c9d34655e82f058e70bf9e0100a51b78d6d5e7429c978219f722021698d3c"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.945787 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" event={"ID":"cf36cce9-c0d2-4250-b980-2bec1a306493","Type":"ContainerStarted","Data":"a04d399f895cc126085e78bf373f92a8f41c52e897cfc37274524e3718c91ca9"} Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.947169 5000 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fpmdv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.947209 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" 
podUID="7f1846c9-70fd-44b0-8ea0-f0d67a308185" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.950000 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.950165 5000 patch_prober.go:28] interesting pod/downloads-7954f5f757-tf7rj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.950261 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tf7rj" podUID="2245d315-61bc-4b08-8e67-ffb6f2b84674" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 05 21:36:36 crc kubenswrapper[5000]: I0105 21:36:36.984637 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" podStartSLOduration=127.984622441 podStartE2EDuration="2m7.984622441s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:36.905754732 +0000 UTC m=+151.861957201" watchObservedRunningTime="2026-01-05 21:36:36.984622441 +0000 UTC m=+151.940824910" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.010310 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.019518 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:37 crc kubenswrapper[5000]: E0105 21:36:37.019856 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:37.519843595 +0000 UTC m=+152.476046054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.034763 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wsljz" podStartSLOduration=9.03474809 podStartE2EDuration="9.03474809s" podCreationTimestamp="2026-01-05 21:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:37.034111772 +0000 UTC m=+151.990314241" watchObservedRunningTime="2026-01-05 21:36:37.03474809 +0000 UTC m=+151.990950559" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.035798 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bs84b" podStartSLOduration=128.03579306 podStartE2EDuration="2m8.03579306s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:36.98494441 +0000 UTC m=+151.941146879" watchObservedRunningTime="2026-01-05 21:36:37.03579306 +0000 UTC m=+151.991995529" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.075674 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-hkznp" podStartSLOduration=128.075657186 podStartE2EDuration="2m8.075657186s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:37.075136261 +0000 UTC m=+152.031338730" watchObservedRunningTime="2026-01-05 21:36:37.075657186 +0000 UTC m=+152.031859655" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.122793 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:37 crc kubenswrapper[5000]: E0105 21:36:37.124125 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:37.624110137 +0000 UTC m=+152.580312596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.157149 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-z86kg" podStartSLOduration=128.157123908 podStartE2EDuration="2m8.157123908s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:37.13260443 +0000 UTC m=+152.088806899" watchObservedRunningTime="2026-01-05 21:36:37.157123908 +0000 UTC m=+152.113326377" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.182124 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6fm8d" podStartSLOduration=128.182107931 podStartE2EDuration="2m8.182107931s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:37.181194735 +0000 UTC m=+152.137397204" watchObservedRunningTime="2026-01-05 21:36:37.182107931 +0000 UTC m=+152.138310400" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.224537 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: 
\"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:37 crc kubenswrapper[5000]: E0105 21:36:37.224864 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:37.724851669 +0000 UTC m=+152.681054138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.257490 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7djbs" podStartSLOduration=128.257471359 podStartE2EDuration="2m8.257471359s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:37.212449866 +0000 UTC m=+152.168652335" watchObservedRunningTime="2026-01-05 21:36:37.257471359 +0000 UTC m=+152.213673828" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.306271 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" podStartSLOduration=128.30625214 podStartE2EDuration="2m8.30625214s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:37.25610683 
+0000 UTC m=+152.212309309" watchObservedRunningTime="2026-01-05 21:36:37.30625214 +0000 UTC m=+152.262454599" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.326634 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:37 crc kubenswrapper[5000]: E0105 21:36:37.326918 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:37.826881298 +0000 UTC m=+152.783083767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.376733 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sc8gc" podStartSLOduration=128.376718309 podStartE2EDuration="2m8.376718309s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:37.361872435 +0000 UTC m=+152.318074924" watchObservedRunningTime="2026-01-05 21:36:37.376718309 +0000 UTC m=+152.332920778" Jan 05 21:36:37 crc 
kubenswrapper[5000]: I0105 21:36:37.378785 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t2pxx" podStartSLOduration=128.378776627 podStartE2EDuration="2m8.378776627s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:37.307166126 +0000 UTC m=+152.263368605" watchObservedRunningTime="2026-01-05 21:36:37.378776627 +0000 UTC m=+152.334979096" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.430060 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:37 crc kubenswrapper[5000]: E0105 21:36:37.430514 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:37.930498962 +0000 UTC m=+152.886701431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.442592 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xbzjp" podStartSLOduration=128.442572476 podStartE2EDuration="2m8.442572476s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:37.38553779 +0000 UTC m=+152.341740279" watchObservedRunningTime="2026-01-05 21:36:37.442572476 +0000 UTC m=+152.398774965" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.443119 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" podStartSLOduration=128.443113051 podStartE2EDuration="2m8.443113051s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:37.43744331 +0000 UTC m=+152.393645779" watchObservedRunningTime="2026-01-05 21:36:37.443113051 +0000 UTC m=+152.399315520" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.535475 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:37 crc kubenswrapper[5000]: E0105 21:36:37.535632 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.035607388 +0000 UTC m=+152.991809857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.535738 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:37 crc kubenswrapper[5000]: E0105 21:36:37.536136 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.036121193 +0000 UTC m=+152.992323672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.636275 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:37 crc kubenswrapper[5000]: E0105 21:36:37.636448 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.136429322 +0000 UTC m=+153.092631791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.638376 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:37 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:37 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:37 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.638433 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.656620 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-pp4rh" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.664051 5000 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.737215 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:37 crc kubenswrapper[5000]: E0105 21:36:37.737606 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.237591236 +0000 UTC m=+153.193793705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.838363 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:37 crc kubenswrapper[5000]: E0105 21:36:37.838537 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.338511993 +0000 UTC m=+153.294714462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.838669 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:37 crc kubenswrapper[5000]: E0105 21:36:37.838982 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.338974397 +0000 UTC m=+153.295176856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.939516 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:37 crc kubenswrapper[5000]: E0105 21:36:37.939739 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.439719229 +0000 UTC m=+153.395921718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.951040 5000 generic.go:334] "Generic (PLEG): container finished" podID="77750436-ae8c-4ab3-9647-dfd13c2822c6" containerID="fb6465f66c0cd2329f1b84db157a05851c9f56e3d7d1b965b0ee93bc05230c7c" exitCode=0 Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.951111 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" event={"ID":"77750436-ae8c-4ab3-9647-dfd13c2822c6","Type":"ContainerDied","Data":"fb6465f66c0cd2329f1b84db157a05851c9f56e3d7d1b965b0ee93bc05230c7c"} Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.953411 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7djbs" event={"ID":"125d3243-1198-4f7d-8930-d1890b5def2a","Type":"ContainerStarted","Data":"e97d71a1e9bbae340ab3bd82ec7f43abcaaca232062f9eff602671fc0af3144e"} Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.956250 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" event={"ID":"12d673e5-293f-4f9e-abe0-a92528bd45c3","Type":"ContainerStarted","Data":"5e2f62d9927089dd44ea1e245cb41298564e6c2f50d72b247aa3d5aafe57fa71"} Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.956280 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" 
event={"ID":"12d673e5-293f-4f9e-abe0-a92528bd45c3","Type":"ContainerStarted","Data":"13811d282d9dc9a9b47ce7c53d45b607f13917eb7fa766fbcf0049b0cf33375a"} Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.966850 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:36:37 crc kubenswrapper[5000]: I0105 21:36:37.975007 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jbnc9" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.042692 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:38 crc kubenswrapper[5000]: E0105 21:36:38.045240 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.545229967 +0000 UTC m=+153.501432436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.143994 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:38 crc kubenswrapper[5000]: E0105 21:36:38.144236 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.644201078 +0000 UTC m=+153.600403547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.144406 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:38 crc kubenswrapper[5000]: E0105 21:36:38.144807 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.644795215 +0000 UTC m=+153.600997734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.245015 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:38 crc kubenswrapper[5000]: E0105 21:36:38.245163 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.745138456 +0000 UTC m=+153.701340925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.245325 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:38 crc kubenswrapper[5000]: E0105 21:36:38.245612 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.745605019 +0000 UTC m=+153.701807488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.346212 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:38 crc kubenswrapper[5000]: E0105 21:36:38.346487 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.846466484 +0000 UTC m=+153.802668943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.447426 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:38 crc kubenswrapper[5000]: E0105 21:36:38.447792 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-05 21:36:38.947776192 +0000 UTC m=+153.903978661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w4mfk" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.496534 5000 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-05T21:36:37.66407572Z","Handler":null,"Name":""} Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.500476 5000 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.500513 5000 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.548874 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.552993 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.637298 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:38 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:38 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:38 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.637347 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.649773 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.656060 5000 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.656113 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.692651 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w4mfk\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.792956 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-blwk8"] Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.793922 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.799027 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.840057 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-blwk8"] Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.854254 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-utilities\") pod \"community-operators-blwk8\" (UID: \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\") " pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.854305 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-catalog-content\") pod \"community-operators-blwk8\" (UID: \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\") " pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.854375 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hn2h\" (UniqueName: \"kubernetes.io/projected/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-kube-api-access-6hn2h\") pod \"community-operators-blwk8\" (UID: \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\") " pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.896497 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.918249 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.918849 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.920913 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.921085 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.940385 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.956637 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-utilities\") pod \"community-operators-blwk8\" (UID: \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\") " pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.956683 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-catalog-content\") pod \"community-operators-blwk8\" (UID: \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\") " pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.956720 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/38854af9-531c-48b5-809e-aeb9d78e3839-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"38854af9-531c-48b5-809e-aeb9d78e3839\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.956765 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38854af9-531c-48b5-809e-aeb9d78e3839-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"38854af9-531c-48b5-809e-aeb9d78e3839\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.956784 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hn2h\" (UniqueName: \"kubernetes.io/projected/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-kube-api-access-6hn2h\") pod \"community-operators-blwk8\" (UID: \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\") " pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.957599 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-catalog-content\") pod \"community-operators-blwk8\" (UID: \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\") " pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.965108 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-utilities\") pod \"community-operators-blwk8\" (UID: \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\") " pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.978572 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-srwl2"] Jan 
05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.979634 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.981579 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.983169 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" event={"ID":"12d673e5-293f-4f9e-abe0-a92528bd45c3","Type":"ContainerStarted","Data":"65820c8762f7839186dedf1a49a1ba2bd4c214281a2b73d6f666638766b3d22b"} Jan 05 21:36:38 crc kubenswrapper[5000]: I0105 21:36:38.996489 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-srwl2"] Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.015509 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hn2h\" (UniqueName: \"kubernetes.io/projected/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-kube-api-access-6hn2h\") pod \"community-operators-blwk8\" (UID: \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\") " pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.047945 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9vkcb" podStartSLOduration=11.04792642 podStartE2EDuration="11.04792642s" podCreationTimestamp="2026-01-05 21:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:39.043914676 +0000 UTC m=+154.000117145" watchObservedRunningTime="2026-01-05 21:36:39.04792642 +0000 UTC m=+154.004128879" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.058654 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38854af9-531c-48b5-809e-aeb9d78e3839-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"38854af9-531c-48b5-809e-aeb9d78e3839\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.058727 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vkfq\" (UniqueName: \"kubernetes.io/projected/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-kube-api-access-9vkfq\") pod \"certified-operators-srwl2\" (UID: \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\") " pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.058791 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38854af9-531c-48b5-809e-aeb9d78e3839-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"38854af9-531c-48b5-809e-aeb9d78e3839\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.058847 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-catalog-content\") pod \"certified-operators-srwl2\" (UID: \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\") " pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.058864 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-utilities\") pod \"certified-operators-srwl2\" (UID: \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\") " pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.060713 5000 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38854af9-531c-48b5-809e-aeb9d78e3839-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"38854af9-531c-48b5-809e-aeb9d78e3839\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.083634 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38854af9-531c-48b5-809e-aeb9d78e3839-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"38854af9-531c-48b5-809e-aeb9d78e3839\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.107521 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.135586 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w4mfk"] Jan 05 21:36:39 crc kubenswrapper[5000]: W0105 21:36:39.144633 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod494f7900_b32c_47c4_8f3b_33dc5a054a7c.slice/crio-e2f692e899bb9a015c76c97726934991c62da0f43fe79662b222af9d347ca533 WatchSource:0}: Error finding container e2f692e899bb9a015c76c97726934991c62da0f43fe79662b222af9d347ca533: Status 404 returned error can't find the container with id e2f692e899bb9a015c76c97726934991c62da0f43fe79662b222af9d347ca533 Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.160325 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vkfq\" (UniqueName: \"kubernetes.io/projected/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-kube-api-access-9vkfq\") pod \"certified-operators-srwl2\" (UID: \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\") " 
pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.160762 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-catalog-content\") pod \"certified-operators-srwl2\" (UID: \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\") " pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.160788 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-utilities\") pod \"certified-operators-srwl2\" (UID: \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\") " pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.161310 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-utilities\") pod \"certified-operators-srwl2\" (UID: \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\") " pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.161565 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-catalog-content\") pod \"certified-operators-srwl2\" (UID: \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\") " pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.181553 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hpdps"] Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.182487 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vkfq\" (UniqueName: 
\"kubernetes.io/projected/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-kube-api-access-9vkfq\") pod \"certified-operators-srwl2\" (UID: \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\") " pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.182588 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.187848 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hpdps"] Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.266259 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-utilities\") pod \"community-operators-hpdps\" (UID: \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\") " pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.266338 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-catalog-content\") pod \"community-operators-hpdps\" (UID: \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\") " pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.266341 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.266398 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwp2d\" (UniqueName: \"kubernetes.io/projected/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-kube-api-access-nwp2d\") pod \"community-operators-hpdps\" (UID: \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\") " pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.289003 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.318429 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-blwk8"] Jan 05 21:36:39 crc kubenswrapper[5000]: W0105 21:36:39.331140 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5361e42c_4e4e_43ff_b7dc_e02436e5d46c.slice/crio-2e8d133819163864c121641a9f802e9356fdf6bf38c2ee14ae7ec3c188dc914b WatchSource:0}: Error finding container 2e8d133819163864c121641a9f802e9356fdf6bf38c2ee14ae7ec3c188dc914b: Status 404 returned error can't find the container with id 2e8d133819163864c121641a9f802e9356fdf6bf38c2ee14ae7ec3c188dc914b Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.333085 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.333430 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.367610 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69xt9\" (UniqueName: \"kubernetes.io/projected/77750436-ae8c-4ab3-9647-dfd13c2822c6-kube-api-access-69xt9\") pod \"77750436-ae8c-4ab3-9647-dfd13c2822c6\" (UID: \"77750436-ae8c-4ab3-9647-dfd13c2822c6\") " Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.367690 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77750436-ae8c-4ab3-9647-dfd13c2822c6-secret-volume\") pod \"77750436-ae8c-4ab3-9647-dfd13c2822c6\" (UID: \"77750436-ae8c-4ab3-9647-dfd13c2822c6\") " Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.367725 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77750436-ae8c-4ab3-9647-dfd13c2822c6-config-volume\") pod \"77750436-ae8c-4ab3-9647-dfd13c2822c6\" (UID: \"77750436-ae8c-4ab3-9647-dfd13c2822c6\") " Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.367816 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-catalog-content\") pod \"community-operators-hpdps\" (UID: \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\") " pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.367880 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwp2d\" (UniqueName: 
\"kubernetes.io/projected/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-kube-api-access-nwp2d\") pod \"community-operators-hpdps\" (UID: \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\") " pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.367930 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-utilities\") pod \"community-operators-hpdps\" (UID: \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\") " pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.368728 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-utilities\") pod \"community-operators-hpdps\" (UID: \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\") " pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.368775 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77750436-ae8c-4ab3-9647-dfd13c2822c6-config-volume" (OuterVolumeSpecName: "config-volume") pod "77750436-ae8c-4ab3-9647-dfd13c2822c6" (UID: "77750436-ae8c-4ab3-9647-dfd13c2822c6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.368855 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gmllz"] Jan 05 21:36:39 crc kubenswrapper[5000]: E0105 21:36:39.369120 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77750436-ae8c-4ab3-9647-dfd13c2822c6" containerName="collect-profiles" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.369145 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="77750436-ae8c-4ab3-9647-dfd13c2822c6" containerName="collect-profiles" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.369280 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="77750436-ae8c-4ab3-9647-dfd13c2822c6" containerName="collect-profiles" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.369685 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-catalog-content\") pod \"community-operators-hpdps\" (UID: \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\") " pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.370150 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.376325 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77750436-ae8c-4ab3-9647-dfd13c2822c6-kube-api-access-69xt9" (OuterVolumeSpecName: "kube-api-access-69xt9") pod "77750436-ae8c-4ab3-9647-dfd13c2822c6" (UID: "77750436-ae8c-4ab3-9647-dfd13c2822c6"). InnerVolumeSpecName "kube-api-access-69xt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.377303 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77750436-ae8c-4ab3-9647-dfd13c2822c6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "77750436-ae8c-4ab3-9647-dfd13c2822c6" (UID: "77750436-ae8c-4ab3-9647-dfd13c2822c6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.392881 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwp2d\" (UniqueName: \"kubernetes.io/projected/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-kube-api-access-nwp2d\") pod \"community-operators-hpdps\" (UID: \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\") " pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.394215 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gmllz"] Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.468987 5000 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77750436-ae8c-4ab3-9647-dfd13c2822c6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.469228 5000 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77750436-ae8c-4ab3-9647-dfd13c2822c6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.469239 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69xt9\" (UniqueName: \"kubernetes.io/projected/77750436-ae8c-4ab3-9647-dfd13c2822c6-kube-api-access-69xt9\") on node \"crc\" DevicePath \"\"" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.569853 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75cdb501-de2b-46e1-9c36-02d39d4b7d48-catalog-content\") pod \"certified-operators-gmllz\" (UID: \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\") " pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.569976 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75cdb501-de2b-46e1-9c36-02d39d4b7d48-utilities\") pod \"certified-operators-gmllz\" (UID: \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\") " pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.570013 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9zph\" (UniqueName: \"kubernetes.io/projected/75cdb501-de2b-46e1-9c36-02d39d4b7d48-kube-api-access-s9zph\") pod \"certified-operators-gmllz\" (UID: \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\") " pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.579966 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.580241 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.641274 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:39 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:39 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:39 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.641581 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.688753 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9zph\" (UniqueName: \"kubernetes.io/projected/75cdb501-de2b-46e1-9c36-02d39d4b7d48-kube-api-access-s9zph\") pod \"certified-operators-gmllz\" (UID: \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\") " pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.688787 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-srwl2"] Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.688804 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75cdb501-de2b-46e1-9c36-02d39d4b7d48-catalog-content\") pod \"certified-operators-gmllz\" (UID: \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\") " pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.689106 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75cdb501-de2b-46e1-9c36-02d39d4b7d48-utilities\") pod \"certified-operators-gmllz\" (UID: \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\") " pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.689390 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75cdb501-de2b-46e1-9c36-02d39d4b7d48-catalog-content\") pod \"certified-operators-gmllz\" (UID: \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\") " pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.689502 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75cdb501-de2b-46e1-9c36-02d39d4b7d48-utilities\") pod \"certified-operators-gmllz\" (UID: \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\") " pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.730495 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9zph\" (UniqueName: \"kubernetes.io/projected/75cdb501-de2b-46e1-9c36-02d39d4b7d48-kube-api-access-s9zph\") pod \"certified-operators-gmllz\" (UID: \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\") " pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.854471 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hpdps"] Jan 05 21:36:39 crc kubenswrapper[5000]: W0105 21:36:39.857090 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd59d28db_4d0b_49f7_88bd_fd8f82b9a14d.slice/crio-3011a81ae5d8e90dd769b0720f049ba2eef7b65c8742816fb62e8be28ef67bcf WatchSource:0}: Error finding container 
3011a81ae5d8e90dd769b0720f049ba2eef7b65c8742816fb62e8be28ef67bcf: Status 404 returned error can't find the container with id 3011a81ae5d8e90dd769b0720f049ba2eef7b65c8742816fb62e8be28ef67bcf Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.873645 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dxhf" Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.988707 5000 generic.go:334] "Generic (PLEG): container finished" podID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" containerID="f1cd5e6d60c1a9cb54d2334a956b33afcc098a17cd359e001ee1a0a993ce0d6a" exitCode=0 Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.988924 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srwl2" event={"ID":"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3","Type":"ContainerDied","Data":"f1cd5e6d60c1a9cb54d2334a956b33afcc098a17cd359e001ee1a0a993ce0d6a"} Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.989104 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srwl2" event={"ID":"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3","Type":"ContainerStarted","Data":"62c2a2e8446915c54b21fc9392263d614cec1507938ae28a4319145e8a18d522"} Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.990521 5000 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.991350 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpdps" event={"ID":"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d","Type":"ContainerStarted","Data":"3011a81ae5d8e90dd769b0720f049ba2eef7b65c8742816fb62e8be28ef67bcf"} Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.993362 5000 generic.go:334] "Generic (PLEG): container finished" podID="5361e42c-4e4e-43ff-b7dc-e02436e5d46c" 
containerID="49296fd31720d85bd0ca0a1e7a6106b94bb7cc287328bc26356ceef7fb03356b" exitCode=0 Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.993413 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blwk8" event={"ID":"5361e42c-4e4e-43ff-b7dc-e02436e5d46c","Type":"ContainerDied","Data":"49296fd31720d85bd0ca0a1e7a6106b94bb7cc287328bc26356ceef7fb03356b"} Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.993431 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blwk8" event={"ID":"5361e42c-4e4e-43ff-b7dc-e02436e5d46c","Type":"ContainerStarted","Data":"2e8d133819163864c121641a9f802e9356fdf6bf38c2ee14ae7ec3c188dc914b"} Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.996192 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"38854af9-531c-48b5-809e-aeb9d78e3839","Type":"ContainerStarted","Data":"4e070c09b513ba2031788c8410162cb5eb3ad654a11b234b8bba2866117bade2"} Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.996215 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"38854af9-531c-48b5-809e-aeb9d78e3839","Type":"ContainerStarted","Data":"efb71cd0e1311dc73717c5f0a720b07cf7420070c5b5b4b6e45256d75b285c53"} Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.999610 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" event={"ID":"494f7900-b32c-47c4-8f3b-33dc5a054a7c","Type":"ContainerStarted","Data":"c7fa82335a1127fdf8e30c7fe3f1f5ec8f4f3fc49bbe416ffb0829694f98b1be"} Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.999657 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" 
event={"ID":"494f7900-b32c-47c4-8f3b-33dc5a054a7c","Type":"ContainerStarted","Data":"e2f692e899bb9a015c76c97726934991c62da0f43fe79662b222af9d347ca533"} Jan 05 21:36:39 crc kubenswrapper[5000]: I0105 21:36:39.999813 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.002382 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.023263 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" event={"ID":"77750436-ae8c-4ab3-9647-dfd13c2822c6","Type":"ContainerDied","Data":"52d18f64552e3ebdea8695cf39e2bd6d2738af513cfdebf7813867a0cc8c01a0"} Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.023333 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52d18f64552e3ebdea8695cf39e2bd6d2738af513cfdebf7813867a0cc8c01a0" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.023572 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.026149 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.026131747 podStartE2EDuration="2.026131747s" podCreationTimestamp="2026-01-05 21:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:40.023932164 +0000 UTC m=+154.980134633" watchObservedRunningTime="2026-01-05 21:36:40.026131747 +0000 UTC m=+154.982334216" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.064283 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" podStartSLOduration=131.064269364 podStartE2EDuration="2m11.064269364s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:36:40.062785882 +0000 UTC m=+155.018988361" watchObservedRunningTime="2026-01-05 21:36:40.064269364 +0000 UTC m=+155.020471833" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.183024 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gmllz"] Jan 05 21:36:40 crc kubenswrapper[5000]: W0105 21:36:40.188015 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75cdb501_de2b_46e1_9c36_02d39d4b7d48.slice/crio-088387577d6bf5e20c49a7c4564cf9b5ee5d9ef3ea87a0896681321f413d2bcb WatchSource:0}: Error finding container 088387577d6bf5e20c49a7c4564cf9b5ee5d9ef3ea87a0896681321f413d2bcb: Status 404 returned error can't find the container with id 088387577d6bf5e20c49a7c4564cf9b5ee5d9ef3ea87a0896681321f413d2bcb Jan 05 
21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.636205 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:40 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:40 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:40 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.636556 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.713413 5000 patch_prober.go:28] interesting pod/downloads-7954f5f757-tf7rj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.713458 5000 patch_prober.go:28] interesting pod/downloads-7954f5f757-tf7rj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.713467 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tf7rj" podUID="2245d315-61bc-4b08-8e67-ffb6f2b84674" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.713507 5000 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-tf7rj" podUID="2245d315-61bc-4b08-8e67-ffb6f2b84674" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.762524 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.762580 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.769685 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wx9jq"] Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.770966 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.776138 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.777639 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.788623 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wx9jq"] Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.861622 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.861773 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.870603 
5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.903241 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7btlt\" (UniqueName: \"kubernetes.io/projected/74fefb64-8607-40f3-aeb6-b4578ed8d91c-kube-api-access-7btlt\") pod \"redhat-marketplace-wx9jq\" (UID: \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\") " pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.905611 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74fefb64-8607-40f3-aeb6-b4578ed8d91c-utilities\") pod \"redhat-marketplace-wx9jq\" (UID: \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\") " pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.905677 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74fefb64-8607-40f3-aeb6-b4578ed8d91c-catalog-content\") pod \"redhat-marketplace-wx9jq\" (UID: \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\") " pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.982776 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.982837 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.984255 5000 patch_prober.go:28] interesting pod/console-f9d7485db-7mvq2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 
10.217.0.15:8443: connect: connection refused" start-of-body= Jan 05 21:36:40 crc kubenswrapper[5000]: I0105 21:36:40.984287 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-7mvq2" podUID="71825513-a9cf-4528-962f-b0c05006bdcd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.006634 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74fefb64-8607-40f3-aeb6-b4578ed8d91c-utilities\") pod \"redhat-marketplace-wx9jq\" (UID: \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\") " pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.006700 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74fefb64-8607-40f3-aeb6-b4578ed8d91c-catalog-content\") pod \"redhat-marketplace-wx9jq\" (UID: \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\") " pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.006864 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7btlt\" (UniqueName: \"kubernetes.io/projected/74fefb64-8607-40f3-aeb6-b4578ed8d91c-kube-api-access-7btlt\") pod \"redhat-marketplace-wx9jq\" (UID: \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\") " pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.008273 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74fefb64-8607-40f3-aeb6-b4578ed8d91c-catalog-content\") pod \"redhat-marketplace-wx9jq\" (UID: \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\") " pod="openshift-marketplace/redhat-marketplace-wx9jq" 
Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.009283 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74fefb64-8607-40f3-aeb6-b4578ed8d91c-utilities\") pod \"redhat-marketplace-wx9jq\" (UID: \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\") " pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.055074 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7btlt\" (UniqueName: \"kubernetes.io/projected/74fefb64-8607-40f3-aeb6-b4578ed8d91c-kube-api-access-7btlt\") pod \"redhat-marketplace-wx9jq\" (UID: \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\") " pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.056130 5000 generic.go:334] "Generic (PLEG): container finished" podID="75cdb501-de2b-46e1-9c36-02d39d4b7d48" containerID="534702aad728406433a56941d04b3124b8d057ba897aa26b4c3ec4d87e7fa1d5" exitCode=0 Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.056239 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gmllz" event={"ID":"75cdb501-de2b-46e1-9c36-02d39d4b7d48","Type":"ContainerDied","Data":"534702aad728406433a56941d04b3124b8d057ba897aa26b4c3ec4d87e7fa1d5"} Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.056286 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gmllz" event={"ID":"75cdb501-de2b-46e1-9c36-02d39d4b7d48","Type":"ContainerStarted","Data":"088387577d6bf5e20c49a7c4564cf9b5ee5d9ef3ea87a0896681321f413d2bcb"} Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.068104 5000 generic.go:334] "Generic (PLEG): container finished" podID="38854af9-531c-48b5-809e-aeb9d78e3839" containerID="4e070c09b513ba2031788c8410162cb5eb3ad654a11b234b8bba2866117bade2" exitCode=0 Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.068960 5000 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"38854af9-531c-48b5-809e-aeb9d78e3839","Type":"ContainerDied","Data":"4e070c09b513ba2031788c8410162cb5eb3ad654a11b234b8bba2866117bade2"} Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.086254 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.121174 5000 generic.go:334] "Generic (PLEG): container finished" podID="d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" containerID="938ae7c9747bd3efa31a617a64f91ef3e0d98f35823e4b4de5a5e09c77360ae0" exitCode=0 Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.124833 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpdps" event={"ID":"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d","Type":"ContainerDied","Data":"938ae7c9747bd3efa31a617a64f91ef3e0d98f35823e4b4de5a5e09c77360ae0"} Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.141255 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7djbs" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.150507 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krkd9" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.171089 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rr7dx"] Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.187511 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr7dx"] Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.187612 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.322515 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-utilities\") pod \"redhat-marketplace-rr7dx\" (UID: \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\") " pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.322792 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-catalog-content\") pod \"redhat-marketplace-rr7dx\" (UID: \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\") " pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.322926 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjh9\" (UniqueName: \"kubernetes.io/projected/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-kube-api-access-wrjh9\") pod \"redhat-marketplace-rr7dx\" (UID: \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\") " pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.423739 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-catalog-content\") pod \"redhat-marketplace-rr7dx\" (UID: \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\") " pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.423827 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrjh9\" (UniqueName: \"kubernetes.io/projected/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-kube-api-access-wrjh9\") pod 
\"redhat-marketplace-rr7dx\" (UID: \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\") " pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.423855 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-utilities\") pod \"redhat-marketplace-rr7dx\" (UID: \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\") " pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.424413 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-utilities\") pod \"redhat-marketplace-rr7dx\" (UID: \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\") " pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.424663 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-catalog-content\") pod \"redhat-marketplace-rr7dx\" (UID: \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\") " pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.460404 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrjh9\" (UniqueName: \"kubernetes.io/projected/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-kube-api-access-wrjh9\") pod \"redhat-marketplace-rr7dx\" (UID: \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\") " pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.538408 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.635761 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.638249 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:41 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:41 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:41 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.638309 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.855148 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wx9jq"] Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.978687 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bsh5l"] Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.979793 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.981629 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 05 21:36:41 crc kubenswrapper[5000]: I0105 21:36:41.991083 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bsh5l"] Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.122359 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr7dx"] Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.135107 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh8j4\" (UniqueName: \"kubernetes.io/projected/6ddd1046-c918-4c58-921d-5108500a388f-kube-api-access-wh8j4\") pod \"redhat-operators-bsh5l\" (UID: \"6ddd1046-c918-4c58-921d-5108500a388f\") " pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.135143 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ddd1046-c918-4c58-921d-5108500a388f-catalog-content\") pod \"redhat-operators-bsh5l\" (UID: \"6ddd1046-c918-4c58-921d-5108500a388f\") " pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.135369 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ddd1046-c918-4c58-921d-5108500a388f-utilities\") pod \"redhat-operators-bsh5l\" (UID: \"6ddd1046-c918-4c58-921d-5108500a388f\") " pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.139526 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-wx9jq" event={"ID":"74fefb64-8607-40f3-aeb6-b4578ed8d91c","Type":"ContainerStarted","Data":"6641a8e64346a964f9ccb9b5fccd123d991ff60977865d90a15ef82809e3ce5e"} Jan 05 21:36:42 crc kubenswrapper[5000]: W0105 21:36:42.160269 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2094ccfc_f32d_4d29_82d5_b0abeb9586eb.slice/crio-131f3cf9a70d9c6100b023ad1bab93fcb253dd4bf80f58076857671dbff39133 WatchSource:0}: Error finding container 131f3cf9a70d9c6100b023ad1bab93fcb253dd4bf80f58076857671dbff39133: Status 404 returned error can't find the container with id 131f3cf9a70d9c6100b023ad1bab93fcb253dd4bf80f58076857671dbff39133 Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.236608 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh8j4\" (UniqueName: \"kubernetes.io/projected/6ddd1046-c918-4c58-921d-5108500a388f-kube-api-access-wh8j4\") pod \"redhat-operators-bsh5l\" (UID: \"6ddd1046-c918-4c58-921d-5108500a388f\") " pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.236647 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ddd1046-c918-4c58-921d-5108500a388f-catalog-content\") pod \"redhat-operators-bsh5l\" (UID: \"6ddd1046-c918-4c58-921d-5108500a388f\") " pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.236710 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ddd1046-c918-4c58-921d-5108500a388f-utilities\") pod \"redhat-operators-bsh5l\" (UID: \"6ddd1046-c918-4c58-921d-5108500a388f\") " pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.237916 5000 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ddd1046-c918-4c58-921d-5108500a388f-catalog-content\") pod \"redhat-operators-bsh5l\" (UID: \"6ddd1046-c918-4c58-921d-5108500a388f\") " pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.238306 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ddd1046-c918-4c58-921d-5108500a388f-utilities\") pod \"redhat-operators-bsh5l\" (UID: \"6ddd1046-c918-4c58-921d-5108500a388f\") " pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.256443 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh8j4\" (UniqueName: \"kubernetes.io/projected/6ddd1046-c918-4c58-921d-5108500a388f-kube-api-access-wh8j4\") pod \"redhat-operators-bsh5l\" (UID: \"6ddd1046-c918-4c58-921d-5108500a388f\") " pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.312279 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.366818 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ldc6p"] Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.369142 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.391200 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ldc6p"] Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.439753 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6155048-94d4-4319-a5fc-2c70e648d94e-catalog-content\") pod \"redhat-operators-ldc6p\" (UID: \"b6155048-94d4-4319-a5fc-2c70e648d94e\") " pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.439808 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5wx8\" (UniqueName: \"kubernetes.io/projected/b6155048-94d4-4319-a5fc-2c70e648d94e-kube-api-access-k5wx8\") pod \"redhat-operators-ldc6p\" (UID: \"b6155048-94d4-4319-a5fc-2c70e648d94e\") " pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.439828 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6155048-94d4-4319-a5fc-2c70e648d94e-utilities\") pod \"redhat-operators-ldc6p\" (UID: \"b6155048-94d4-4319-a5fc-2c70e648d94e\") " pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.454197 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.540540 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38854af9-531c-48b5-809e-aeb9d78e3839-kube-api-access\") pod \"38854af9-531c-48b5-809e-aeb9d78e3839\" (UID: \"38854af9-531c-48b5-809e-aeb9d78e3839\") " Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.540634 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38854af9-531c-48b5-809e-aeb9d78e3839-kubelet-dir\") pod \"38854af9-531c-48b5-809e-aeb9d78e3839\" (UID: \"38854af9-531c-48b5-809e-aeb9d78e3839\") " Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.540797 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38854af9-531c-48b5-809e-aeb9d78e3839-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "38854af9-531c-48b5-809e-aeb9d78e3839" (UID: "38854af9-531c-48b5-809e-aeb9d78e3839"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.540977 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6155048-94d4-4319-a5fc-2c70e648d94e-catalog-content\") pod \"redhat-operators-ldc6p\" (UID: \"b6155048-94d4-4319-a5fc-2c70e648d94e\") " pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.541068 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5wx8\" (UniqueName: \"kubernetes.io/projected/b6155048-94d4-4319-a5fc-2c70e648d94e-kube-api-access-k5wx8\") pod \"redhat-operators-ldc6p\" (UID: \"b6155048-94d4-4319-a5fc-2c70e648d94e\") " pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.541097 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6155048-94d4-4319-a5fc-2c70e648d94e-utilities\") pod \"redhat-operators-ldc6p\" (UID: \"b6155048-94d4-4319-a5fc-2c70e648d94e\") " pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.541484 5000 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38854af9-531c-48b5-809e-aeb9d78e3839-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.541943 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6155048-94d4-4319-a5fc-2c70e648d94e-utilities\") pod \"redhat-operators-ldc6p\" (UID: \"b6155048-94d4-4319-a5fc-2c70e648d94e\") " pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.542657 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6155048-94d4-4319-a5fc-2c70e648d94e-catalog-content\") pod \"redhat-operators-ldc6p\" (UID: \"b6155048-94d4-4319-a5fc-2c70e648d94e\") " pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.546272 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38854af9-531c-48b5-809e-aeb9d78e3839-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "38854af9-531c-48b5-809e-aeb9d78e3839" (UID: "38854af9-531c-48b5-809e-aeb9d78e3839"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.567584 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5wx8\" (UniqueName: \"kubernetes.io/projected/b6155048-94d4-4319-a5fc-2c70e648d94e-kube-api-access-k5wx8\") pod \"redhat-operators-ldc6p\" (UID: \"b6155048-94d4-4319-a5fc-2c70e648d94e\") " pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.636450 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:42 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:42 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:42 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.636516 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.642660 5000 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38854af9-531c-48b5-809e-aeb9d78e3839-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.700174 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.875112 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 05 21:36:42 crc kubenswrapper[5000]: E0105 21:36:42.876456 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38854af9-531c-48b5-809e-aeb9d78e3839" containerName="pruner" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.876473 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="38854af9-531c-48b5-809e-aeb9d78e3839" containerName="pruner" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.876658 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="38854af9-531c-48b5-809e-aeb9d78e3839" containerName="pruner" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.877378 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.882370 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.882673 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.892618 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.943641 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bsh5l"] Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.946015 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0caef2b5-0bc7-4b09-836e-719605e15d47-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0caef2b5-0bc7-4b09-836e-719605e15d47\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:36:42 crc kubenswrapper[5000]: I0105 21:36:42.946136 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0caef2b5-0bc7-4b09-836e-719605e15d47-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0caef2b5-0bc7-4b09-836e-719605e15d47\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.046869 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0caef2b5-0bc7-4b09-836e-719605e15d47-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0caef2b5-0bc7-4b09-836e-719605e15d47\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 
21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.046969 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0caef2b5-0bc7-4b09-836e-719605e15d47-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0caef2b5-0bc7-4b09-836e-719605e15d47\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.047003 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0caef2b5-0bc7-4b09-836e-719605e15d47-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0caef2b5-0bc7-4b09-836e-719605e15d47\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.068691 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0caef2b5-0bc7-4b09-836e-719605e15d47-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0caef2b5-0bc7-4b09-836e-719605e15d47\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.155695 5000 generic.go:334] "Generic (PLEG): container finished" podID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" containerID="52237f78a2f17ee61c700b48d1dec26a32791b10a55002ad0e473c8893826771" exitCode=0 Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.155805 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr7dx" event={"ID":"2094ccfc-f32d-4d29-82d5-b0abeb9586eb","Type":"ContainerDied","Data":"52237f78a2f17ee61c700b48d1dec26a32791b10a55002ad0e473c8893826771"} Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.156013 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr7dx" 
event={"ID":"2094ccfc-f32d-4d29-82d5-b0abeb9586eb","Type":"ContainerStarted","Data":"131f3cf9a70d9c6100b023ad1bab93fcb253dd4bf80f58076857671dbff39133"} Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.173322 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsh5l" event={"ID":"6ddd1046-c918-4c58-921d-5108500a388f","Type":"ContainerStarted","Data":"c929dd1a8b9511fc0af152c19b3040b478693fc14f03b177eef658c7b04ecaad"} Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.175507 5000 generic.go:334] "Generic (PLEG): container finished" podID="74fefb64-8607-40f3-aeb6-b4578ed8d91c" containerID="2f4271bc2238447d45778c381a6765678a1f5f1412cc6d223974003faca85d9b" exitCode=0 Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.175549 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wx9jq" event={"ID":"74fefb64-8607-40f3-aeb6-b4578ed8d91c","Type":"ContainerDied","Data":"2f4271bc2238447d45778c381a6765678a1f5f1412cc6d223974003faca85d9b"} Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.183689 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.186977 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"38854af9-531c-48b5-809e-aeb9d78e3839","Type":"ContainerDied","Data":"efb71cd0e1311dc73717c5f0a720b07cf7420070c5b5b4b6e45256d75b285c53"} Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.187006 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efb71cd0e1311dc73717c5f0a720b07cf7420070c5b5b4b6e45256d75b285c53" Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.219323 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.386912 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ldc6p"] Jan 05 21:36:43 crc kubenswrapper[5000]: W0105 21:36:43.418123 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6155048_94d4_4319_a5fc_2c70e648d94e.slice/crio-6a760856c6528d70fffd9ba5455f3992aa7b73efb71facc08d18a4c8f00d423a WatchSource:0}: Error finding container 6a760856c6528d70fffd9ba5455f3992aa7b73efb71facc08d18a4c8f00d423a: Status 404 returned error can't find the container with id 6a760856c6528d70fffd9ba5455f3992aa7b73efb71facc08d18a4c8f00d423a Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.636815 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:43 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:43 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:43 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.636867 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:43 crc kubenswrapper[5000]: I0105 21:36:43.725192 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 05 21:36:43 crc kubenswrapper[5000]: W0105 21:36:43.795436 5000 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod0caef2b5_0bc7_4b09_836e_719605e15d47.slice/crio-e844126d8ef90757b362012f5c8c005f0ffa885b25e7e1b89246c534d276bf0f WatchSource:0}: Error finding container e844126d8ef90757b362012f5c8c005f0ffa885b25e7e1b89246c534d276bf0f: Status 404 returned error can't find the container with id e844126d8ef90757b362012f5c8c005f0ffa885b25e7e1b89246c534d276bf0f Jan 05 21:36:44 crc kubenswrapper[5000]: I0105 21:36:44.200674 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldc6p" event={"ID":"b6155048-94d4-4319-a5fc-2c70e648d94e","Type":"ContainerStarted","Data":"6a760856c6528d70fffd9ba5455f3992aa7b73efb71facc08d18a4c8f00d423a"} Jan 05 21:36:44 crc kubenswrapper[5000]: I0105 21:36:44.206329 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0caef2b5-0bc7-4b09-836e-719605e15d47","Type":"ContainerStarted","Data":"e844126d8ef90757b362012f5c8c005f0ffa885b25e7e1b89246c534d276bf0f"} Jan 05 21:36:44 crc kubenswrapper[5000]: I0105 21:36:44.216286 5000 generic.go:334] "Generic (PLEG): container finished" podID="6ddd1046-c918-4c58-921d-5108500a388f" containerID="012defa5570114cf9e342e75d36ce2ededfb89fe7557afc54023cc55edb7c1a0" exitCode=0 Jan 05 21:36:44 crc kubenswrapper[5000]: I0105 21:36:44.216649 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsh5l" event={"ID":"6ddd1046-c918-4c58-921d-5108500a388f","Type":"ContainerDied","Data":"012defa5570114cf9e342e75d36ce2ededfb89fe7557afc54023cc55edb7c1a0"} Jan 05 21:36:44 crc kubenswrapper[5000]: I0105 21:36:44.636211 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:44 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 
21:36:44 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:44 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:44 crc kubenswrapper[5000]: I0105 21:36:44.636477 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:45 crc kubenswrapper[5000]: I0105 21:36:45.226182 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0caef2b5-0bc7-4b09-836e-719605e15d47","Type":"ContainerStarted","Data":"42715aa2880c38e565f5848ff59675f4a3cf143840799d0907b63c3c8d916a64"} Jan 05 21:36:45 crc kubenswrapper[5000]: I0105 21:36:45.228107 5000 generic.go:334] "Generic (PLEG): container finished" podID="b6155048-94d4-4319-a5fc-2c70e648d94e" containerID="80baddf1dc6d14bdf59ed1eff5b10df66cad177c7176c308919ebe57d332e18e" exitCode=0 Jan 05 21:36:45 crc kubenswrapper[5000]: I0105 21:36:45.228223 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldc6p" event={"ID":"b6155048-94d4-4319-a5fc-2c70e648d94e","Type":"ContainerDied","Data":"80baddf1dc6d14bdf59ed1eff5b10df66cad177c7176c308919ebe57d332e18e"} Jan 05 21:36:45 crc kubenswrapper[5000]: I0105 21:36:45.636167 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:45 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:45 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:45 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:45 crc kubenswrapper[5000]: I0105 21:36:45.636232 5000 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:46 crc kubenswrapper[5000]: I0105 21:36:46.236595 5000 generic.go:334] "Generic (PLEG): container finished" podID="0caef2b5-0bc7-4b09-836e-719605e15d47" containerID="42715aa2880c38e565f5848ff59675f4a3cf143840799d0907b63c3c8d916a64" exitCode=0 Jan 05 21:36:46 crc kubenswrapper[5000]: I0105 21:36:46.236651 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0caef2b5-0bc7-4b09-836e-719605e15d47","Type":"ContainerDied","Data":"42715aa2880c38e565f5848ff59675f4a3cf143840799d0907b63c3c8d916a64"} Jan 05 21:36:46 crc kubenswrapper[5000]: I0105 21:36:46.638474 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:46 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:46 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:46 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:46 crc kubenswrapper[5000]: I0105 21:36:46.638868 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:47 crc kubenswrapper[5000]: I0105 21:36:47.093127 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nt45v" Jan 05 21:36:47 crc kubenswrapper[5000]: I0105 21:36:47.635627 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:47 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:47 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:47 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:47 crc kubenswrapper[5000]: I0105 21:36:47.635696 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:48 crc kubenswrapper[5000]: I0105 21:36:48.636429 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:48 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:48 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:48 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:48 crc kubenswrapper[5000]: I0105 21:36:48.636838 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:49 crc kubenswrapper[5000]: I0105 21:36:49.636396 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:49 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:49 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:49 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:49 crc kubenswrapper[5000]: I0105 
21:36:49.636492 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.096172 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.203610 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0caef2b5-0bc7-4b09-836e-719605e15d47-kubelet-dir\") pod \"0caef2b5-0bc7-4b09-836e-719605e15d47\" (UID: \"0caef2b5-0bc7-4b09-836e-719605e15d47\") " Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.203713 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0caef2b5-0bc7-4b09-836e-719605e15d47-kube-api-access\") pod \"0caef2b5-0bc7-4b09-836e-719605e15d47\" (UID: \"0caef2b5-0bc7-4b09-836e-719605e15d47\") " Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.203961 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0caef2b5-0bc7-4b09-836e-719605e15d47-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0caef2b5-0bc7-4b09-836e-719605e15d47" (UID: "0caef2b5-0bc7-4b09-836e-719605e15d47"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.204074 5000 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0caef2b5-0bc7-4b09-836e-719605e15d47-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.209223 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0caef2b5-0bc7-4b09-836e-719605e15d47-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0caef2b5-0bc7-4b09-836e-719605e15d47" (UID: "0caef2b5-0bc7-4b09-836e-719605e15d47"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.268610 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0caef2b5-0bc7-4b09-836e-719605e15d47","Type":"ContainerDied","Data":"e844126d8ef90757b362012f5c8c005f0ffa885b25e7e1b89246c534d276bf0f"} Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.268663 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e844126d8ef90757b362012f5c8c005f0ffa885b25e7e1b89246c534d276bf0f" Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.268727 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.305086 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0caef2b5-0bc7-4b09-836e-719605e15d47-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.636935 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:50 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:50 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:50 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.637029 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.731111 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tf7rj" Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.982342 5000 patch_prober.go:28] interesting pod/console-f9d7485db-7mvq2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 05 21:36:50 crc kubenswrapper[5000]: I0105 21:36:50.982400 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-7mvq2" podUID="71825513-a9cf-4528-962f-b0c05006bdcd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": 
dial tcp 10.217.0.15:8443: connect: connection refused" Jan 05 21:36:51 crc kubenswrapper[5000]: I0105 21:36:51.620937 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:51 crc kubenswrapper[5000]: I0105 21:36:51.626161 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3a4c991-8f85-4923-afb4-8cc78ceeaed8-metrics-certs\") pod \"network-metrics-daemon-gpwcw\" (UID: \"b3a4c991-8f85-4923-afb4-8cc78ceeaed8\") " pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:51 crc kubenswrapper[5000]: I0105 21:36:51.636734 5000 patch_prober.go:28] interesting pod/router-default-5444994796-fskst container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 05 21:36:51 crc kubenswrapper[5000]: [-]has-synced failed: reason withheld Jan 05 21:36:51 crc kubenswrapper[5000]: [+]process-running ok Jan 05 21:36:51 crc kubenswrapper[5000]: healthz check failed Jan 05 21:36:51 crc kubenswrapper[5000]: I0105 21:36:51.636855 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fskst" podUID="d97efce6-8e46-4981-ae4b-1d1d5b24bbf9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 05 21:36:51 crc kubenswrapper[5000]: I0105 21:36:51.682437 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gpwcw" Jan 05 21:36:52 crc kubenswrapper[5000]: I0105 21:36:52.636884 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:52 crc kubenswrapper[5000]: I0105 21:36:52.639072 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-fskst" Jan 05 21:36:53 crc kubenswrapper[5000]: I0105 21:36:53.099085 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:36:53 crc kubenswrapper[5000]: I0105 21:36:53.099179 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:36:58 crc kubenswrapper[5000]: I0105 21:36:58.903691 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:37:00 crc kubenswrapper[5000]: I0105 21:37:00.986383 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:37:00 crc kubenswrapper[5000]: I0105 21:37:00.990635 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:37:01 crc kubenswrapper[5000]: E0105 21:37:01.571180 5000 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 05 21:37:01 crc kubenswrapper[5000]: E0105 21:37:01.571338 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vkfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-srwl2_openshift-marketplace(8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 05 21:37:01 crc kubenswrapper[5000]: E0105 21:37:01.572580 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-srwl2" podUID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" Jan 05 21:37:06 crc kubenswrapper[5000]: E0105 21:37:06.019921 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-srwl2" podUID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" Jan 05 21:37:06 crc kubenswrapper[5000]: E0105 21:37:06.118179 5000 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 05 21:37:06 crc kubenswrapper[5000]: E0105 21:37:06.118477 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrjh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rr7dx_openshift-marketplace(2094ccfc-f32d-4d29-82d5-b0abeb9586eb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 05 21:37:06 crc kubenswrapper[5000]: E0105 21:37:06.120644 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rr7dx" podUID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" Jan 05 21:37:09 crc 
kubenswrapper[5000]: E0105 21:37:09.428550 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rr7dx" podUID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" Jan 05 21:37:09 crc kubenswrapper[5000]: I0105 21:37:09.723320 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gpwcw"] Jan 05 21:37:09 crc kubenswrapper[5000]: W0105 21:37:09.739726 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3a4c991_8f85_4923_afb4_8cc78ceeaed8.slice/crio-70c90df1aa592b50f979dfc42221822aa4d325a9c8d2ea396ed41284b198d2c1 WatchSource:0}: Error finding container 70c90df1aa592b50f979dfc42221822aa4d325a9c8d2ea396ed41284b198d2c1: Status 404 returned error can't find the container with id 70c90df1aa592b50f979dfc42221822aa4d325a9c8d2ea396ed41284b198d2c1 Jan 05 21:37:10 crc kubenswrapper[5000]: I0105 21:37:10.386086 5000 generic.go:334] "Generic (PLEG): container finished" podID="75cdb501-de2b-46e1-9c36-02d39d4b7d48" containerID="b33597203c06e55684fb5a7e37a940aa9c57fca12ec39e9753339a718a409c8e" exitCode=0 Jan 05 21:37:10 crc kubenswrapper[5000]: I0105 21:37:10.386319 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gmllz" event={"ID":"75cdb501-de2b-46e1-9c36-02d39d4b7d48","Type":"ContainerDied","Data":"b33597203c06e55684fb5a7e37a940aa9c57fca12ec39e9753339a718a409c8e"} Jan 05 21:37:10 crc kubenswrapper[5000]: I0105 21:37:10.389462 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" event={"ID":"b3a4c991-8f85-4923-afb4-8cc78ceeaed8","Type":"ContainerStarted","Data":"a5b47b7b24dc0af5f91942009fd201fd9eb810c202c55f9408fae33452ce51e1"} Jan 05 21:37:10 
crc kubenswrapper[5000]: I0105 21:37:10.389495 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" event={"ID":"b3a4c991-8f85-4923-afb4-8cc78ceeaed8","Type":"ContainerStarted","Data":"70c90df1aa592b50f979dfc42221822aa4d325a9c8d2ea396ed41284b198d2c1"} Jan 05 21:37:10 crc kubenswrapper[5000]: I0105 21:37:10.396207 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsh5l" event={"ID":"6ddd1046-c918-4c58-921d-5108500a388f","Type":"ContainerStarted","Data":"e1929e22958f702cd682744a72a286b0c83e8dae14e2a63e4f67387f239f26ce"} Jan 05 21:37:10 crc kubenswrapper[5000]: I0105 21:37:10.401217 5000 generic.go:334] "Generic (PLEG): container finished" podID="d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" containerID="4b032082c38164e06af930a3dc1b781f6496d0174918689be6404d75ff1b79af" exitCode=0 Jan 05 21:37:10 crc kubenswrapper[5000]: I0105 21:37:10.401345 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpdps" event={"ID":"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d","Type":"ContainerDied","Data":"4b032082c38164e06af930a3dc1b781f6496d0174918689be6404d75ff1b79af"} Jan 05 21:37:10 crc kubenswrapper[5000]: I0105 21:37:10.404246 5000 generic.go:334] "Generic (PLEG): container finished" podID="5361e42c-4e4e-43ff-b7dc-e02436e5d46c" containerID="8c408d21b704f12ddb31516d5c7344450a8fc50776a1bc8d2be27aec204223ca" exitCode=0 Jan 05 21:37:10 crc kubenswrapper[5000]: I0105 21:37:10.404460 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blwk8" event={"ID":"5361e42c-4e4e-43ff-b7dc-e02436e5d46c","Type":"ContainerDied","Data":"8c408d21b704f12ddb31516d5c7344450a8fc50776a1bc8d2be27aec204223ca"} Jan 05 21:37:10 crc kubenswrapper[5000]: I0105 21:37:10.407428 5000 generic.go:334] "Generic (PLEG): container finished" podID="74fefb64-8607-40f3-aeb6-b4578ed8d91c" 
containerID="0583cb254e7c012493f4fa89bd24e693ab3321682a6f47734ef496d2bbd86747" exitCode=0 Jan 05 21:37:10 crc kubenswrapper[5000]: I0105 21:37:10.407520 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wx9jq" event={"ID":"74fefb64-8607-40f3-aeb6-b4578ed8d91c","Type":"ContainerDied","Data":"0583cb254e7c012493f4fa89bd24e693ab3321682a6f47734ef496d2bbd86747"} Jan 05 21:37:10 crc kubenswrapper[5000]: I0105 21:37:10.411247 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldc6p" event={"ID":"b6155048-94d4-4319-a5fc-2c70e648d94e","Type":"ContainerStarted","Data":"4287c9603e84e2acb240a5e56624d0547880f1a3fe53c8ef79c893e4fc992186"} Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.418183 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blwk8" event={"ID":"5361e42c-4e4e-43ff-b7dc-e02436e5d46c","Type":"ContainerStarted","Data":"b95064b0b81bf0cb8a04baedd0333d31787eb98805bb31706ff60bb1a8a1696c"} Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.420772 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wx9jq" event={"ID":"74fefb64-8607-40f3-aeb6-b4578ed8d91c","Type":"ContainerStarted","Data":"5d0f48a962674a3b94daf948074c047971cd451728b9baf45bf299f11af48cec"} Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.422842 5000 generic.go:334] "Generic (PLEG): container finished" podID="b6155048-94d4-4319-a5fc-2c70e648d94e" containerID="4287c9603e84e2acb240a5e56624d0547880f1a3fe53c8ef79c893e4fc992186" exitCode=0 Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.422912 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldc6p" event={"ID":"b6155048-94d4-4319-a5fc-2c70e648d94e","Type":"ContainerDied","Data":"4287c9603e84e2acb240a5e56624d0547880f1a3fe53c8ef79c893e4fc992186"} Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 
21:37:11.425856 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gmllz" event={"ID":"75cdb501-de2b-46e1-9c36-02d39d4b7d48","Type":"ContainerStarted","Data":"efd45678fd23dfee3adcb46adf726018efc7e9ba2ba3710f5aee606e16c76f93"} Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.427975 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gpwcw" event={"ID":"b3a4c991-8f85-4923-afb4-8cc78ceeaed8","Type":"ContainerStarted","Data":"ddcfdfc65573f404efeac2c39b2002ee0cd8936f304f91be1d854b19c77a6285"} Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.430477 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpdps" event={"ID":"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d","Type":"ContainerStarted","Data":"438202f727efbbc9355550d3d01404edf263a62af2918ee85814cf4e724469d0"} Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.432425 5000 generic.go:334] "Generic (PLEG): container finished" podID="6ddd1046-c918-4c58-921d-5108500a388f" containerID="e1929e22958f702cd682744a72a286b0c83e8dae14e2a63e4f67387f239f26ce" exitCode=0 Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.432454 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsh5l" event={"ID":"6ddd1046-c918-4c58-921d-5108500a388f","Type":"ContainerDied","Data":"e1929e22958f702cd682744a72a286b0c83e8dae14e2a63e4f67387f239f26ce"} Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.480156 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-blwk8" podStartSLOduration=2.585463401 podStartE2EDuration="33.480134228s" podCreationTimestamp="2026-01-05 21:36:38 +0000 UTC" firstStartedPulling="2026-01-05 21:36:39.994943288 +0000 UTC m=+154.951145757" lastFinishedPulling="2026-01-05 21:37:10.889614115 +0000 UTC m=+185.845816584" observedRunningTime="2026-01-05 
21:37:11.455810465 +0000 UTC m=+186.412012934" watchObservedRunningTime="2026-01-05 21:37:11.480134228 +0000 UTC m=+186.436336697" Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.524454 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gmllz" podStartSLOduration=2.657461292 podStartE2EDuration="32.524436201s" podCreationTimestamp="2026-01-05 21:36:39 +0000 UTC" firstStartedPulling="2026-01-05 21:36:41.069124121 +0000 UTC m=+156.025326590" lastFinishedPulling="2026-01-05 21:37:10.93609899 +0000 UTC m=+185.892301499" observedRunningTime="2026-01-05 21:37:11.481785335 +0000 UTC m=+186.437987824" watchObservedRunningTime="2026-01-05 21:37:11.524436201 +0000 UTC m=+186.480638670" Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.543725 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gpwcw" podStartSLOduration=162.543701511 podStartE2EDuration="2m42.543701511s" podCreationTimestamp="2026-01-05 21:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:37:11.539926013 +0000 UTC m=+186.496128482" watchObservedRunningTime="2026-01-05 21:37:11.543701511 +0000 UTC m=+186.499903980" Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.548595 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.557837 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hpdps" podStartSLOduration=2.763256269 podStartE2EDuration="32.557822323s" podCreationTimestamp="2026-01-05 21:36:39 +0000 UTC" firstStartedPulling="2026-01-05 21:36:41.126929089 +0000 UTC m=+156.083131558" lastFinishedPulling="2026-01-05 21:37:10.921495123 +0000 UTC 
m=+185.877697612" observedRunningTime="2026-01-05 21:37:11.555046774 +0000 UTC m=+186.511249243" watchObservedRunningTime="2026-01-05 21:37:11.557822323 +0000 UTC m=+186.514024792" Jan 05 21:37:11 crc kubenswrapper[5000]: I0105 21:37:11.613912 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wx9jq" podStartSLOduration=3.848357519 podStartE2EDuration="31.613876261s" podCreationTimestamp="2026-01-05 21:36:40 +0000 UTC" firstStartedPulling="2026-01-05 21:36:43.177436053 +0000 UTC m=+158.133638522" lastFinishedPulling="2026-01-05 21:37:10.942954785 +0000 UTC m=+185.899157264" observedRunningTime="2026-01-05 21:37:11.593602673 +0000 UTC m=+186.549805142" watchObservedRunningTime="2026-01-05 21:37:11.613876261 +0000 UTC m=+186.570078730" Jan 05 21:37:12 crc kubenswrapper[5000]: I0105 21:37:12.061397 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r8mhp" Jan 05 21:37:12 crc kubenswrapper[5000]: I0105 21:37:12.439611 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsh5l" event={"ID":"6ddd1046-c918-4c58-921d-5108500a388f","Type":"ContainerStarted","Data":"1b5d28f1e19a9b58a78523ac55e79b2ef6abd20cafd57b9216f21f7b2f533332"} Jan 05 21:37:12 crc kubenswrapper[5000]: I0105 21:37:12.442552 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldc6p" event={"ID":"b6155048-94d4-4319-a5fc-2c70e648d94e","Type":"ContainerStarted","Data":"fb4a497ef0299d8ad767daaab135291a504f361b49ed351a3c7f2346ad0121d1"} Jan 05 21:37:12 crc kubenswrapper[5000]: I0105 21:37:12.463912 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bsh5l" podStartSLOduration=4.730695852 podStartE2EDuration="31.463878063s" podCreationTimestamp="2026-01-05 21:36:41 +0000 UTC" 
firstStartedPulling="2026-01-05 21:36:45.229759361 +0000 UTC m=+160.185961830" lastFinishedPulling="2026-01-05 21:37:11.962941572 +0000 UTC m=+186.919144041" observedRunningTime="2026-01-05 21:37:12.463034679 +0000 UTC m=+187.419237168" watchObservedRunningTime="2026-01-05 21:37:12.463878063 +0000 UTC m=+187.420080542" Jan 05 21:37:12 crc kubenswrapper[5000]: I0105 21:37:12.479422 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ldc6p" podStartSLOduration=3.640323575 podStartE2EDuration="30.479404395s" podCreationTimestamp="2026-01-05 21:36:42 +0000 UTC" firstStartedPulling="2026-01-05 21:36:45.230386539 +0000 UTC m=+160.186589008" lastFinishedPulling="2026-01-05 21:37:12.069467349 +0000 UTC m=+187.025669828" observedRunningTime="2026-01-05 21:37:12.477004737 +0000 UTC m=+187.433207216" watchObservedRunningTime="2026-01-05 21:37:12.479404395 +0000 UTC m=+187.435606864" Jan 05 21:37:12 crc kubenswrapper[5000]: I0105 21:37:12.701488 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:37:12 crc kubenswrapper[5000]: I0105 21:37:12.701534 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:37:13 crc kubenswrapper[5000]: I0105 21:37:13.773925 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ldc6p" podUID="b6155048-94d4-4319-a5fc-2c70e648d94e" containerName="registry-server" probeResult="failure" output=< Jan 05 21:37:13 crc kubenswrapper[5000]: timeout: failed to connect service ":50051" within 1s Jan 05 21:37:13 crc kubenswrapper[5000]: > Jan 05 21:37:14 crc kubenswrapper[5000]: I0105 21:37:14.470107 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5jg6l"] Jan 05 21:37:19 crc kubenswrapper[5000]: I0105 21:37:19.108559 5000 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:37:19 crc kubenswrapper[5000]: I0105 21:37:19.109380 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:37:19 crc kubenswrapper[5000]: I0105 21:37:19.192180 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:37:19 crc kubenswrapper[5000]: I0105 21:37:19.499139 5000 generic.go:334] "Generic (PLEG): container finished" podID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" containerID="32a3ac5e5f62e943213dbd1b8db1a8c5612797f483ce80d623ba3f27142119cf" exitCode=0 Jan 05 21:37:19 crc kubenswrapper[5000]: I0105 21:37:19.499224 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srwl2" event={"ID":"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3","Type":"ContainerDied","Data":"32a3ac5e5f62e943213dbd1b8db1a8c5612797f483ce80d623ba3f27142119cf"} Jan 05 21:37:19 crc kubenswrapper[5000]: I0105 21:37:19.536152 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:37:19 crc kubenswrapper[5000]: I0105 21:37:19.581270 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:37:19 crc kubenswrapper[5000]: I0105 21:37:19.581316 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:37:19 crc kubenswrapper[5000]: I0105 21:37:19.613765 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:37:20 crc kubenswrapper[5000]: I0105 21:37:20.003362 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:37:20 crc kubenswrapper[5000]: I0105 21:37:20.003422 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:37:20 crc kubenswrapper[5000]: I0105 21:37:20.057161 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:37:20 crc kubenswrapper[5000]: I0105 21:37:20.505579 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srwl2" event={"ID":"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3","Type":"ContainerStarted","Data":"9b15de792bbc6eefd87d8aa91f69b1002f2c3c602860e8ba8bcf6eef2c889bb7"} Jan 05 21:37:20 crc kubenswrapper[5000]: I0105 21:37:20.527094 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-srwl2" podStartSLOduration=2.571903626 podStartE2EDuration="42.527079446s" podCreationTimestamp="2026-01-05 21:36:38 +0000 UTC" firstStartedPulling="2026-01-05 21:36:39.990287445 +0000 UTC m=+154.946489914" lastFinishedPulling="2026-01-05 21:37:19.945463265 +0000 UTC m=+194.901665734" observedRunningTime="2026-01-05 21:37:20.52441256 +0000 UTC m=+195.480615029" watchObservedRunningTime="2026-01-05 21:37:20.527079446 +0000 UTC m=+195.483281915" Jan 05 21:37:20 crc kubenswrapper[5000]: I0105 21:37:20.564388 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:37:20 crc kubenswrapper[5000]: I0105 21:37:20.564974 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:37:21 crc kubenswrapper[5000]: I0105 21:37:21.087584 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:37:21 crc 
kubenswrapper[5000]: I0105 21:37:21.088232 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:37:21 crc kubenswrapper[5000]: I0105 21:37:21.131885 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:37:21 crc kubenswrapper[5000]: I0105 21:37:21.557766 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:37:21 crc kubenswrapper[5000]: I0105 21:37:21.869073 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 05 21:37:21 crc kubenswrapper[5000]: E0105 21:37:21.869514 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0caef2b5-0bc7-4b09-836e-719605e15d47" containerName="pruner" Jan 05 21:37:21 crc kubenswrapper[5000]: I0105 21:37:21.869525 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0caef2b5-0bc7-4b09-836e-719605e15d47" containerName="pruner" Jan 05 21:37:21 crc kubenswrapper[5000]: I0105 21:37:21.869631 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="0caef2b5-0bc7-4b09-836e-719605e15d47" containerName="pruner" Jan 05 21:37:21 crc kubenswrapper[5000]: I0105 21:37:21.870045 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:37:21 crc kubenswrapper[5000]: I0105 21:37:21.872949 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 05 21:37:21 crc kubenswrapper[5000]: I0105 21:37:21.873105 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 05 21:37:21 crc kubenswrapper[5000]: I0105 21:37:21.877515 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 05 21:37:21 crc kubenswrapper[5000]: I0105 21:37:21.953760 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e32b08a-7136-49f4-b5ef-e1630f4eb95c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1e32b08a-7136-49f4-b5ef-e1630f4eb95c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:37:21 crc kubenswrapper[5000]: I0105 21:37:21.953813 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e32b08a-7136-49f4-b5ef-e1630f4eb95c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1e32b08a-7136-49f4-b5ef-e1630f4eb95c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.056047 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e32b08a-7136-49f4-b5ef-e1630f4eb95c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1e32b08a-7136-49f4-b5ef-e1630f4eb95c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.056130 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1e32b08a-7136-49f4-b5ef-e1630f4eb95c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1e32b08a-7136-49f4-b5ef-e1630f4eb95c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.056172 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e32b08a-7136-49f4-b5ef-e1630f4eb95c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1e32b08a-7136-49f4-b5ef-e1630f4eb95c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.081348 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e32b08a-7136-49f4-b5ef-e1630f4eb95c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1e32b08a-7136-49f4-b5ef-e1630f4eb95c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.193450 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.313321 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.313871 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.359329 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.362886 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.422011 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gmllz"] Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.514608 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1e32b08a-7136-49f4-b5ef-e1630f4eb95c","Type":"ContainerStarted","Data":"ee753970834cbcd7d844149775009b86eb2f1288a76b4640be6cb791dc43ca8c"} Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.515196 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gmllz" podUID="75cdb501-de2b-46e1-9c36-02d39d4b7d48" containerName="registry-server" containerID="cri-o://efd45678fd23dfee3adcb46adf726018efc7e9ba2ba3710f5aee606e16c76f93" gracePeriod=2 Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.622279 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.623110 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-hpdps"] Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.623348 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hpdps" podUID="d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" containerName="registry-server" containerID="cri-o://438202f727efbbc9355550d3d01404edf263a62af2918ee85814cf4e724469d0" gracePeriod=2 Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.746058 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:37:22 crc kubenswrapper[5000]: I0105 21:37:22.807432 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:37:23 crc kubenswrapper[5000]: I0105 21:37:23.099386 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:37:23 crc kubenswrapper[5000]: I0105 21:37:23.099445 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:37:23 crc kubenswrapper[5000]: I0105 21:37:23.521752 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1e32b08a-7136-49f4-b5ef-e1630f4eb95c","Type":"ContainerStarted","Data":"e8633995254ac7e0c3b1cf4ae406917d4229bb65eaf179cca01b7b1609bb2e2e"} Jan 05 21:37:24 crc kubenswrapper[5000]: I0105 21:37:24.529349 5000 generic.go:334] "Generic (PLEG): container finished" 
podID="75cdb501-de2b-46e1-9c36-02d39d4b7d48" containerID="efd45678fd23dfee3adcb46adf726018efc7e9ba2ba3710f5aee606e16c76f93" exitCode=0 Jan 05 21:37:24 crc kubenswrapper[5000]: I0105 21:37:24.529396 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gmllz" event={"ID":"75cdb501-de2b-46e1-9c36-02d39d4b7d48","Type":"ContainerDied","Data":"efd45678fd23dfee3adcb46adf726018efc7e9ba2ba3710f5aee606e16c76f93"} Jan 05 21:37:24 crc kubenswrapper[5000]: I0105 21:37:24.531806 5000 generic.go:334] "Generic (PLEG): container finished" podID="d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" containerID="438202f727efbbc9355550d3d01404edf263a62af2918ee85814cf4e724469d0" exitCode=0 Jan 05 21:37:24 crc kubenswrapper[5000]: I0105 21:37:24.532041 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpdps" event={"ID":"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d","Type":"ContainerDied","Data":"438202f727efbbc9355550d3d01404edf263a62af2918ee85814cf4e724469d0"} Jan 05 21:37:24 crc kubenswrapper[5000]: I0105 21:37:24.551529 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.551502184 podStartE2EDuration="3.551502184s" podCreationTimestamp="2026-01-05 21:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:37:24.547866726 +0000 UTC m=+199.504069225" watchObservedRunningTime="2026-01-05 21:37:24.551502184 +0000 UTC m=+199.507704653" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.024999 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ldc6p"] Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.025793 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ldc6p" 
podUID="b6155048-94d4-4319-a5fc-2c70e648d94e" containerName="registry-server" containerID="cri-o://fb4a497ef0299d8ad767daaab135291a504f361b49ed351a3c7f2346ad0121d1" gracePeriod=2 Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.095167 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.099487 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.226505 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwp2d\" (UniqueName: \"kubernetes.io/projected/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-kube-api-access-nwp2d\") pod \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\" (UID: \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\") " Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.226555 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9zph\" (UniqueName: \"kubernetes.io/projected/75cdb501-de2b-46e1-9c36-02d39d4b7d48-kube-api-access-s9zph\") pod \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\" (UID: \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\") " Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.226580 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75cdb501-de2b-46e1-9c36-02d39d4b7d48-catalog-content\") pod \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\" (UID: \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\") " Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.226619 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75cdb501-de2b-46e1-9c36-02d39d4b7d48-utilities\") pod \"75cdb501-de2b-46e1-9c36-02d39d4b7d48\" (UID: 
\"75cdb501-de2b-46e1-9c36-02d39d4b7d48\") " Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.226637 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-utilities\") pod \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\" (UID: \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\") " Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.226662 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-catalog-content\") pod \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\" (UID: \"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d\") " Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.227621 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75cdb501-de2b-46e1-9c36-02d39d4b7d48-utilities" (OuterVolumeSpecName: "utilities") pod "75cdb501-de2b-46e1-9c36-02d39d4b7d48" (UID: "75cdb501-de2b-46e1-9c36-02d39d4b7d48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.228900 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-utilities" (OuterVolumeSpecName: "utilities") pod "d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" (UID: "d59d28db-4d0b-49f7-88bd-fd8f82b9a14d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.236094 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-kube-api-access-nwp2d" (OuterVolumeSpecName: "kube-api-access-nwp2d") pod "d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" (UID: "d59d28db-4d0b-49f7-88bd-fd8f82b9a14d"). 
InnerVolumeSpecName "kube-api-access-nwp2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.236149 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75cdb501-de2b-46e1-9c36-02d39d4b7d48-kube-api-access-s9zph" (OuterVolumeSpecName: "kube-api-access-s9zph") pod "75cdb501-de2b-46e1-9c36-02d39d4b7d48" (UID: "75cdb501-de2b-46e1-9c36-02d39d4b7d48"). InnerVolumeSpecName "kube-api-access-s9zph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.281090 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" (UID: "d59d28db-4d0b-49f7-88bd-fd8f82b9a14d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.285332 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75cdb501-de2b-46e1-9c36-02d39d4b7d48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75cdb501-de2b-46e1-9c36-02d39d4b7d48" (UID: "75cdb501-de2b-46e1-9c36-02d39d4b7d48"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.328377 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwp2d\" (UniqueName: \"kubernetes.io/projected/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-kube-api-access-nwp2d\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.328434 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9zph\" (UniqueName: \"kubernetes.io/projected/75cdb501-de2b-46e1-9c36-02d39d4b7d48-kube-api-access-s9zph\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.328447 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75cdb501-de2b-46e1-9c36-02d39d4b7d48-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.328458 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75cdb501-de2b-46e1-9c36-02d39d4b7d48-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.328512 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.328523 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.537884 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpdps" event={"ID":"d59d28db-4d0b-49f7-88bd-fd8f82b9a14d","Type":"ContainerDied","Data":"3011a81ae5d8e90dd769b0720f049ba2eef7b65c8742816fb62e8be28ef67bcf"} 
Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.537926 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hpdps" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.537963 5000 scope.go:117] "RemoveContainer" containerID="438202f727efbbc9355550d3d01404edf263a62af2918ee85814cf4e724469d0" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.539997 5000 generic.go:334] "Generic (PLEG): container finished" podID="1e32b08a-7136-49f4-b5ef-e1630f4eb95c" containerID="e8633995254ac7e0c3b1cf4ae406917d4229bb65eaf179cca01b7b1609bb2e2e" exitCode=0 Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.540049 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1e32b08a-7136-49f4-b5ef-e1630f4eb95c","Type":"ContainerDied","Data":"e8633995254ac7e0c3b1cf4ae406917d4229bb65eaf179cca01b7b1609bb2e2e"} Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.543262 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gmllz" event={"ID":"75cdb501-de2b-46e1-9c36-02d39d4b7d48","Type":"ContainerDied","Data":"088387577d6bf5e20c49a7c4564cf9b5ee5d9ef3ea87a0896681321f413d2bcb"} Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.543307 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gmllz" Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.576034 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hpdps"] Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.580979 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hpdps"] Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.586169 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gmllz"] Jan 05 21:37:25 crc kubenswrapper[5000]: I0105 21:37:25.589538 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gmllz"] Jan 05 21:37:26 crc kubenswrapper[5000]: I0105 21:37:26.491257 5000 scope.go:117] "RemoveContainer" containerID="4b032082c38164e06af930a3dc1b781f6496d0174918689be6404d75ff1b79af" Jan 05 21:37:26 crc kubenswrapper[5000]: I0105 21:37:26.551531 5000 generic.go:334] "Generic (PLEG): container finished" podID="b6155048-94d4-4319-a5fc-2c70e648d94e" containerID="fb4a497ef0299d8ad767daaab135291a504f361b49ed351a3c7f2346ad0121d1" exitCode=0 Jan 05 21:37:26 crc kubenswrapper[5000]: I0105 21:37:26.551579 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldc6p" event={"ID":"b6155048-94d4-4319-a5fc-2c70e648d94e","Type":"ContainerDied","Data":"fb4a497ef0299d8ad767daaab135291a504f361b49ed351a3c7f2346ad0121d1"} Jan 05 21:37:26 crc kubenswrapper[5000]: I0105 21:37:26.737917 5000 scope.go:117] "RemoveContainer" containerID="938ae7c9747bd3efa31a617a64f91ef3e0d98f35823e4b4de5a5e09c77360ae0" Jan 05 21:37:26 crc kubenswrapper[5000]: I0105 21:37:26.778103 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:37:26 crc kubenswrapper[5000]: I0105 21:37:26.850088 5000 scope.go:117] "RemoveContainer" containerID="efd45678fd23dfee3adcb46adf726018efc7e9ba2ba3710f5aee606e16c76f93" Jan 05 21:37:26 crc kubenswrapper[5000]: I0105 21:37:26.881147 5000 scope.go:117] "RemoveContainer" containerID="b33597203c06e55684fb5a7e37a940aa9c57fca12ec39e9753339a718a409c8e" Jan 05 21:37:26 crc kubenswrapper[5000]: I0105 21:37:26.882414 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:37:26 crc kubenswrapper[5000]: I0105 21:37:26.911449 5000 scope.go:117] "RemoveContainer" containerID="534702aad728406433a56941d04b3124b8d057ba897aa26b4c3ec4d87e7fa1d5" Jan 05 21:37:26 crc kubenswrapper[5000]: I0105 21:37:26.947549 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e32b08a-7136-49f4-b5ef-e1630f4eb95c-kubelet-dir\") pod \"1e32b08a-7136-49f4-b5ef-e1630f4eb95c\" (UID: \"1e32b08a-7136-49f4-b5ef-e1630f4eb95c\") " Jan 05 21:37:26 crc kubenswrapper[5000]: I0105 21:37:26.947673 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e32b08a-7136-49f4-b5ef-e1630f4eb95c-kube-api-access\") pod \"1e32b08a-7136-49f4-b5ef-e1630f4eb95c\" (UID: \"1e32b08a-7136-49f4-b5ef-e1630f4eb95c\") " Jan 05 21:37:26 crc kubenswrapper[5000]: I0105 21:37:26.949330 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e32b08a-7136-49f4-b5ef-e1630f4eb95c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1e32b08a-7136-49f4-b5ef-e1630f4eb95c" (UID: "1e32b08a-7136-49f4-b5ef-e1630f4eb95c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:37:26 crc kubenswrapper[5000]: I0105 21:37:26.953531 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e32b08a-7136-49f4-b5ef-e1630f4eb95c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1e32b08a-7136-49f4-b5ef-e1630f4eb95c" (UID: "1e32b08a-7136-49f4-b5ef-e1630f4eb95c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.049303 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6155048-94d4-4319-a5fc-2c70e648d94e-catalog-content\") pod \"b6155048-94d4-4319-a5fc-2c70e648d94e\" (UID: \"b6155048-94d4-4319-a5fc-2c70e648d94e\") " Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.049399 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6155048-94d4-4319-a5fc-2c70e648d94e-utilities\") pod \"b6155048-94d4-4319-a5fc-2c70e648d94e\" (UID: \"b6155048-94d4-4319-a5fc-2c70e648d94e\") " Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.049482 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5wx8\" (UniqueName: \"kubernetes.io/projected/b6155048-94d4-4319-a5fc-2c70e648d94e-kube-api-access-k5wx8\") pod \"b6155048-94d4-4319-a5fc-2c70e648d94e\" (UID: \"b6155048-94d4-4319-a5fc-2c70e648d94e\") " Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.049839 5000 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e32b08a-7136-49f4-b5ef-e1630f4eb95c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.049865 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1e32b08a-7136-49f4-b5ef-e1630f4eb95c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.050094 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6155048-94d4-4319-a5fc-2c70e648d94e-utilities" (OuterVolumeSpecName: "utilities") pod "b6155048-94d4-4319-a5fc-2c70e648d94e" (UID: "b6155048-94d4-4319-a5fc-2c70e648d94e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.053288 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6155048-94d4-4319-a5fc-2c70e648d94e-kube-api-access-k5wx8" (OuterVolumeSpecName: "kube-api-access-k5wx8") pod "b6155048-94d4-4319-a5fc-2c70e648d94e" (UID: "b6155048-94d4-4319-a5fc-2c70e648d94e"). InnerVolumeSpecName "kube-api-access-k5wx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.155143 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5wx8\" (UniqueName: \"kubernetes.io/projected/b6155048-94d4-4319-a5fc-2c70e648d94e-kube-api-access-k5wx8\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.155217 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6155048-94d4-4319-a5fc-2c70e648d94e-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.180903 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6155048-94d4-4319-a5fc-2c70e648d94e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6155048-94d4-4319-a5fc-2c70e648d94e" (UID: "b6155048-94d4-4319-a5fc-2c70e648d94e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.256197 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6155048-94d4-4319-a5fc-2c70e648d94e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.330092 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75cdb501-de2b-46e1-9c36-02d39d4b7d48" path="/var/lib/kubelet/pods/75cdb501-de2b-46e1-9c36-02d39d4b7d48/volumes" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.330670 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" path="/var/lib/kubelet/pods/d59d28db-4d0b-49f7-88bd-fd8f82b9a14d/volumes" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.560099 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.560293 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1e32b08a-7136-49f4-b5ef-e1630f4eb95c","Type":"ContainerDied","Data":"ee753970834cbcd7d844149775009b86eb2f1288a76b4640be6cb791dc43ca8c"} Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.560681 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee753970834cbcd7d844149775009b86eb2f1288a76b4640be6cb791dc43ca8c" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.562715 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldc6p" event={"ID":"b6155048-94d4-4319-a5fc-2c70e648d94e","Type":"ContainerDied","Data":"6a760856c6528d70fffd9ba5455f3992aa7b73efb71facc08d18a4c8f00d423a"} Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.562756 5000 scope.go:117] "RemoveContainer" 
containerID="fb4a497ef0299d8ad767daaab135291a504f361b49ed351a3c7f2346ad0121d1" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.562820 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldc6p" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.567035 5000 generic.go:334] "Generic (PLEG): container finished" podID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" containerID="30465b20aab135efdb42e9e7616e9b7d3664a3fea4c35ed2af970598755f2a85" exitCode=0 Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.567065 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr7dx" event={"ID":"2094ccfc-f32d-4d29-82d5-b0abeb9586eb","Type":"ContainerDied","Data":"30465b20aab135efdb42e9e7616e9b7d3664a3fea4c35ed2af970598755f2a85"} Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.575707 5000 scope.go:117] "RemoveContainer" containerID="4287c9603e84e2acb240a5e56624d0547880f1a3fe53c8ef79c893e4fc992186" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.639258 5000 scope.go:117] "RemoveContainer" containerID="80baddf1dc6d14bdf59ed1eff5b10df66cad177c7176c308919ebe57d332e18e" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.653259 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ldc6p"] Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.656959 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ldc6p"] Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.863605 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 05 21:37:27 crc kubenswrapper[5000]: E0105 21:37:27.864088 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6155048-94d4-4319-a5fc-2c70e648d94e" containerName="extract-utilities" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.864182 5000 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b6155048-94d4-4319-a5fc-2c70e648d94e" containerName="extract-utilities" Jan 05 21:37:27 crc kubenswrapper[5000]: E0105 21:37:27.864269 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75cdb501-de2b-46e1-9c36-02d39d4b7d48" containerName="extract-content" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.864333 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="75cdb501-de2b-46e1-9c36-02d39d4b7d48" containerName="extract-content" Jan 05 21:37:27 crc kubenswrapper[5000]: E0105 21:37:27.864406 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75cdb501-de2b-46e1-9c36-02d39d4b7d48" containerName="registry-server" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.864488 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="75cdb501-de2b-46e1-9c36-02d39d4b7d48" containerName="registry-server" Jan 05 21:37:27 crc kubenswrapper[5000]: E0105 21:37:27.864569 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e32b08a-7136-49f4-b5ef-e1630f4eb95c" containerName="pruner" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.864644 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e32b08a-7136-49f4-b5ef-e1630f4eb95c" containerName="pruner" Jan 05 21:37:27 crc kubenswrapper[5000]: E0105 21:37:27.864727 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" containerName="registry-server" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.864803 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" containerName="registry-server" Jan 05 21:37:27 crc kubenswrapper[5000]: E0105 21:37:27.864902 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" containerName="extract-content" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.864983 5000 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" containerName="extract-content" Jan 05 21:37:27 crc kubenswrapper[5000]: E0105 21:37:27.865065 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6155048-94d4-4319-a5fc-2c70e648d94e" containerName="extract-content" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.865149 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6155048-94d4-4319-a5fc-2c70e648d94e" containerName="extract-content" Jan 05 21:37:27 crc kubenswrapper[5000]: E0105 21:37:27.865230 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" containerName="extract-utilities" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.865355 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" containerName="extract-utilities" Jan 05 21:37:27 crc kubenswrapper[5000]: E0105 21:37:27.865442 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75cdb501-de2b-46e1-9c36-02d39d4b7d48" containerName="extract-utilities" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.865513 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="75cdb501-de2b-46e1-9c36-02d39d4b7d48" containerName="extract-utilities" Jan 05 21:37:27 crc kubenswrapper[5000]: E0105 21:37:27.865594 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6155048-94d4-4319-a5fc-2c70e648d94e" containerName="registry-server" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.865678 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6155048-94d4-4319-a5fc-2c70e648d94e" containerName="registry-server" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.865906 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e32b08a-7136-49f4-b5ef-e1630f4eb95c" containerName="pruner" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.866015 5000 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="d59d28db-4d0b-49f7-88bd-fd8f82b9a14d" containerName="registry-server" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.866105 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="75cdb501-de2b-46e1-9c36-02d39d4b7d48" containerName="registry-server" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.866192 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6155048-94d4-4319-a5fc-2c70e648d94e" containerName="registry-server" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.866760 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.869663 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.869920 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 05 21:37:27 crc kubenswrapper[5000]: I0105 21:37:27.886121 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.063986 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.064049 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-var-lock\") pod \"installer-9-crc\" (UID: \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 
05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.064103 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-kube-api-access\") pod \"installer-9-crc\" (UID: \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.165103 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-kube-api-access\") pod \"installer-9-crc\" (UID: \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.165215 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.165249 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-var-lock\") pod \"installer-9-crc\" (UID: \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.165317 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.165319 5000 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-var-lock\") pod \"installer-9-crc\" (UID: \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.189568 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-kube-api-access\") pod \"installer-9-crc\" (UID: \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.208407 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.388231 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 05 21:37:28 crc kubenswrapper[5000]: W0105 21:37:28.400072 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2199f70b_6ba2_4e30_8e73_7eb7fc512d37.slice/crio-de5b18ebde4927968f3b65870211c53eb4d983b6d33b7840d8327c27e10393e7 WatchSource:0}: Error finding container de5b18ebde4927968f3b65870211c53eb4d983b6d33b7840d8327c27e10393e7: Status 404 returned error can't find the container with id de5b18ebde4927968f3b65870211c53eb4d983b6d33b7840d8327c27e10393e7 Jan 05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.574864 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr7dx" event={"ID":"2094ccfc-f32d-4d29-82d5-b0abeb9586eb","Type":"ContainerStarted","Data":"ba50db571f3fc5e15a3e789346ca090615c6d42a8112e46e57f58b40ddcf1015"} Jan 05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.577954 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"2199f70b-6ba2-4e30-8e73-7eb7fc512d37","Type":"ContainerStarted","Data":"de5b18ebde4927968f3b65870211c53eb4d983b6d33b7840d8327c27e10393e7"} Jan 05 21:37:28 crc kubenswrapper[5000]: I0105 21:37:28.591293 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rr7dx" podStartSLOduration=2.763105088 podStartE2EDuration="47.591277679s" podCreationTimestamp="2026-01-05 21:36:41 +0000 UTC" firstStartedPulling="2026-01-05 21:36:43.160584893 +0000 UTC m=+158.116787372" lastFinishedPulling="2026-01-05 21:37:27.988757494 +0000 UTC m=+202.944959963" observedRunningTime="2026-01-05 21:37:28.589979311 +0000 UTC m=+203.546181800" watchObservedRunningTime="2026-01-05 21:37:28.591277679 +0000 UTC m=+203.547480148" Jan 05 21:37:29 crc kubenswrapper[5000]: I0105 21:37:29.333754 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6155048-94d4-4319-a5fc-2c70e648d94e" path="/var/lib/kubelet/pods/b6155048-94d4-4319-a5fc-2c70e648d94e/volumes" Jan 05 21:37:29 crc kubenswrapper[5000]: I0105 21:37:29.334726 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:37:29 crc kubenswrapper[5000]: I0105 21:37:29.334757 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:37:29 crc kubenswrapper[5000]: I0105 21:37:29.371950 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:37:29 crc kubenswrapper[5000]: I0105 21:37:29.592260 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2199f70b-6ba2-4e30-8e73-7eb7fc512d37","Type":"ContainerStarted","Data":"d928076cc906ca2314abde6d7371ee9718ca5282714586f574220df15f99f232"} Jan 05 21:37:29 crc kubenswrapper[5000]: I0105 21:37:29.612179 5000 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.612164115 podStartE2EDuration="2.612164115s" podCreationTimestamp="2026-01-05 21:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:37:29.609775265 +0000 UTC m=+204.565977734" watchObservedRunningTime="2026-01-05 21:37:29.612164115 +0000 UTC m=+204.568366584" Jan 05 21:37:29 crc kubenswrapper[5000]: I0105 21:37:29.634123 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:37:31 crc kubenswrapper[5000]: I0105 21:37:31.540457 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:37:31 crc kubenswrapper[5000]: I0105 21:37:31.540785 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:37:31 crc kubenswrapper[5000]: I0105 21:37:31.586551 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:37:39 crc kubenswrapper[5000]: I0105 21:37:39.507824 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" podUID="d7313182-9b06-475a-a504-e5207fc2f330" containerName="oauth-openshift" containerID="cri-o://a0a0d8269bd63da4ad4177b0e753e75c52f444e4c3dd671709e8b812a5ef10b4" gracePeriod=15 Jan 05 21:37:39 crc kubenswrapper[5000]: I0105 21:37:39.925033 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:37:39 crc kubenswrapper[5000]: I0105 21:37:39.957201 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24"] Jan 05 21:37:39 crc kubenswrapper[5000]: E0105 21:37:39.957419 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7313182-9b06-475a-a504-e5207fc2f330" containerName="oauth-openshift" Jan 05 21:37:39 crc kubenswrapper[5000]: I0105 21:37:39.957431 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7313182-9b06-475a-a504-e5207fc2f330" containerName="oauth-openshift" Jan 05 21:37:39 crc kubenswrapper[5000]: I0105 21:37:39.957520 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7313182-9b06-475a-a504-e5207fc2f330" containerName="oauth-openshift" Jan 05 21:37:39 crc kubenswrapper[5000]: I0105 21:37:39.959949 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:39 crc kubenswrapper[5000]: I0105 21:37:39.969845 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24"] Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.012974 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.013047 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-audit-dir\") pod 
\"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.013081 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.013116 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.013137 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.013231 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " 
pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.013313 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.013368 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.013452 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7p79\" (UniqueName: \"kubernetes.io/projected/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-kube-api-access-m7p79\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.013505 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.013590 5000 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.013631 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-audit-policies\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.013660 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.013692 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-session\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.114740 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-trusted-ca-bundle\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.115065 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-cliconfig\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.115183 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-error\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.115283 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-session\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.115385 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-ocp-branding-template\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.115584 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-router-certs\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.115736 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-serving-cert\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.115821 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.115848 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-audit-policies\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.115935 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.116037 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d7313182-9b06-475a-a504-e5207fc2f330-audit-dir\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.116144 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p79nc\" (UniqueName: \"kubernetes.io/projected/d7313182-9b06-475a-a504-e5207fc2f330-kube-api-access-p79nc\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.116251 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-provider-selection\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.116269 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7313182-9b06-475a-a504-e5207fc2f330-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.116366 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-service-ca\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.116478 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-login\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.116666 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.116737 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-idp-0-file-data\") pod \"d7313182-9b06-475a-a504-e5207fc2f330\" (UID: \"d7313182-9b06-475a-a504-e5207fc2f330\") " Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.116877 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.117278 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.117514 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.117659 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.117872 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.118052 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.118164 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.118337 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7p79\" (UniqueName: \"kubernetes.io/projected/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-kube-api-access-m7p79\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: 
\"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.118485 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.118598 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.118707 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-audit-policies\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.118887 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.119090 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-session\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.119226 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.119269 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.123298 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.123727 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.124288 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.124987 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.125532 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-audit-dir\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.125703 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-audit-dir\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.126087 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.126251 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.126429 5000 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.127011 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.127049 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.127065 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.127080 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.127093 5000 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d7313182-9b06-475a-a504-e5207fc2f330-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.127105 5000 reconciler_common.go:293] "Volume 
detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d7313182-9b06-475a-a504-e5207fc2f330-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.128814 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.128924 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-audit-policies\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.129911 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7313182-9b06-475a-a504-e5207fc2f330-kube-api-access-p79nc" (OuterVolumeSpecName: "kube-api-access-p79nc") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "kube-api-access-p79nc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.130555 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-session\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.130747 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.131196 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.131779 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.132055 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.132215 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.134333 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.136642 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.136726 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.136815 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.139095 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.141175 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d7313182-9b06-475a-a504-e5207fc2f330" (UID: "d7313182-9b06-475a-a504-e5207fc2f330"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.145527 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7p79\" (UniqueName: \"kubernetes.io/projected/aee87e02-8ab5-40e7-9667-e5f12c9d92bf-kube-api-access-m7p79\") pod \"oauth-openshift-5d4b6f47b4-7gh24\" (UID: \"aee87e02-8ab5-40e7-9667-e5f12c9d92bf\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.228197 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.228239 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p79nc\" (UniqueName: \"kubernetes.io/projected/d7313182-9b06-475a-a504-e5207fc2f330-kube-api-access-p79nc\") on node \"crc\" DevicePath \"\"" Jan 05 
21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.228253 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.228265 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.228279 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.228293 5000 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d7313182-9b06-475a-a504-e5207fc2f330-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.283307 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.660825 5000 generic.go:334] "Generic (PLEG): container finished" podID="d7313182-9b06-475a-a504-e5207fc2f330" containerID="a0a0d8269bd63da4ad4177b0e753e75c52f444e4c3dd671709e8b812a5ef10b4" exitCode=0 Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.660857 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.660873 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" event={"ID":"d7313182-9b06-475a-a504-e5207fc2f330","Type":"ContainerDied","Data":"a0a0d8269bd63da4ad4177b0e753e75c52f444e4c3dd671709e8b812a5ef10b4"} Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.661322 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5jg6l" event={"ID":"d7313182-9b06-475a-a504-e5207fc2f330","Type":"ContainerDied","Data":"eab596c296d3a81a38850bd1ca76881dbd436164a2401d5ff338965af0db7fee"} Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.661359 5000 scope.go:117] "RemoveContainer" containerID="a0a0d8269bd63da4ad4177b0e753e75c52f444e4c3dd671709e8b812a5ef10b4" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.670295 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24"] Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.688478 5000 scope.go:117] "RemoveContainer" containerID="a0a0d8269bd63da4ad4177b0e753e75c52f444e4c3dd671709e8b812a5ef10b4" Jan 05 21:37:40 crc kubenswrapper[5000]: E0105 21:37:40.700689 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a0d8269bd63da4ad4177b0e753e75c52f444e4c3dd671709e8b812a5ef10b4\": container with ID starting with a0a0d8269bd63da4ad4177b0e753e75c52f444e4c3dd671709e8b812a5ef10b4 not found: ID does not exist" containerID="a0a0d8269bd63da4ad4177b0e753e75c52f444e4c3dd671709e8b812a5ef10b4" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.700773 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a0d8269bd63da4ad4177b0e753e75c52f444e4c3dd671709e8b812a5ef10b4"} err="failed to 
get container status \"a0a0d8269bd63da4ad4177b0e753e75c52f444e4c3dd671709e8b812a5ef10b4\": rpc error: code = NotFound desc = could not find container \"a0a0d8269bd63da4ad4177b0e753e75c52f444e4c3dd671709e8b812a5ef10b4\": container with ID starting with a0a0d8269bd63da4ad4177b0e753e75c52f444e4c3dd671709e8b812a5ef10b4 not found: ID does not exist" Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.717008 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5jg6l"] Jan 05 21:37:40 crc kubenswrapper[5000]: I0105 21:37:40.720535 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5jg6l"] Jan 05 21:37:41 crc kubenswrapper[5000]: I0105 21:37:41.331974 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7313182-9b06-475a-a504-e5207fc2f330" path="/var/lib/kubelet/pods/d7313182-9b06-475a-a504-e5207fc2f330/volumes" Jan 05 21:37:41 crc kubenswrapper[5000]: I0105 21:37:41.581141 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:37:41 crc kubenswrapper[5000]: I0105 21:37:41.619039 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr7dx"] Jan 05 21:37:41 crc kubenswrapper[5000]: I0105 21:37:41.668473 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" event={"ID":"aee87e02-8ab5-40e7-9667-e5f12c9d92bf","Type":"ContainerStarted","Data":"de4ea034506875146b1df8e7132b129a7545faef7c5ed8a42c4f5d608e61f6fd"} Jan 05 21:37:41 crc kubenswrapper[5000]: I0105 21:37:41.668513 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" event={"ID":"aee87e02-8ab5-40e7-9667-e5f12c9d92bf","Type":"ContainerStarted","Data":"bc12e54cb092fc64be2ce11250c640f51a5162d246d3290d72dc0466bc7e2b91"} Jan 05 
21:37:41 crc kubenswrapper[5000]: I0105 21:37:41.668541 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rr7dx" podUID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" containerName="registry-server" containerID="cri-o://ba50db571f3fc5e15a3e789346ca090615c6d42a8112e46e57f58b40ddcf1015" gracePeriod=2 Jan 05 21:37:41 crc kubenswrapper[5000]: I0105 21:37:41.668802 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:41 crc kubenswrapper[5000]: I0105 21:37:41.685851 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" Jan 05 21:37:41 crc kubenswrapper[5000]: I0105 21:37:41.718205 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-7gh24" podStartSLOduration=27.718186327 podStartE2EDuration="27.718186327s" podCreationTimestamp="2026-01-05 21:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:37:41.715441566 +0000 UTC m=+216.671644055" watchObservedRunningTime="2026-01-05 21:37:41.718186327 +0000 UTC m=+216.674388796" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.145127 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.254700 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-utilities\") pod \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\" (UID: \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\") " Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.254770 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrjh9\" (UniqueName: \"kubernetes.io/projected/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-kube-api-access-wrjh9\") pod \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\" (UID: \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\") " Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.254822 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-catalog-content\") pod \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\" (UID: \"2094ccfc-f32d-4d29-82d5-b0abeb9586eb\") " Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.255697 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-utilities" (OuterVolumeSpecName: "utilities") pod "2094ccfc-f32d-4d29-82d5-b0abeb9586eb" (UID: "2094ccfc-f32d-4d29-82d5-b0abeb9586eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.259376 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-kube-api-access-wrjh9" (OuterVolumeSpecName: "kube-api-access-wrjh9") pod "2094ccfc-f32d-4d29-82d5-b0abeb9586eb" (UID: "2094ccfc-f32d-4d29-82d5-b0abeb9586eb"). InnerVolumeSpecName "kube-api-access-wrjh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.294840 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2094ccfc-f32d-4d29-82d5-b0abeb9586eb" (UID: "2094ccfc-f32d-4d29-82d5-b0abeb9586eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.356016 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.356059 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrjh9\" (UniqueName: \"kubernetes.io/projected/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-kube-api-access-wrjh9\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.356073 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2094ccfc-f32d-4d29-82d5-b0abeb9586eb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.677001 5000 generic.go:334] "Generic (PLEG): container finished" podID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" containerID="ba50db571f3fc5e15a3e789346ca090615c6d42a8112e46e57f58b40ddcf1015" exitCode=0 Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.677063 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rr7dx" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.677069 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr7dx" event={"ID":"2094ccfc-f32d-4d29-82d5-b0abeb9586eb","Type":"ContainerDied","Data":"ba50db571f3fc5e15a3e789346ca090615c6d42a8112e46e57f58b40ddcf1015"} Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.677141 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rr7dx" event={"ID":"2094ccfc-f32d-4d29-82d5-b0abeb9586eb","Type":"ContainerDied","Data":"131f3cf9a70d9c6100b023ad1bab93fcb253dd4bf80f58076857671dbff39133"} Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.677174 5000 scope.go:117] "RemoveContainer" containerID="ba50db571f3fc5e15a3e789346ca090615c6d42a8112e46e57f58b40ddcf1015" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.696577 5000 scope.go:117] "RemoveContainer" containerID="30465b20aab135efdb42e9e7616e9b7d3664a3fea4c35ed2af970598755f2a85" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.710562 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr7dx"] Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.713242 5000 scope.go:117] "RemoveContainer" containerID="52237f78a2f17ee61c700b48d1dec26a32791b10a55002ad0e473c8893826771" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.723358 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rr7dx"] Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.731964 5000 scope.go:117] "RemoveContainer" containerID="ba50db571f3fc5e15a3e789346ca090615c6d42a8112e46e57f58b40ddcf1015" Jan 05 21:37:42 crc kubenswrapper[5000]: E0105 21:37:42.732419 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ba50db571f3fc5e15a3e789346ca090615c6d42a8112e46e57f58b40ddcf1015\": container with ID starting with ba50db571f3fc5e15a3e789346ca090615c6d42a8112e46e57f58b40ddcf1015 not found: ID does not exist" containerID="ba50db571f3fc5e15a3e789346ca090615c6d42a8112e46e57f58b40ddcf1015" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.732460 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba50db571f3fc5e15a3e789346ca090615c6d42a8112e46e57f58b40ddcf1015"} err="failed to get container status \"ba50db571f3fc5e15a3e789346ca090615c6d42a8112e46e57f58b40ddcf1015\": rpc error: code = NotFound desc = could not find container \"ba50db571f3fc5e15a3e789346ca090615c6d42a8112e46e57f58b40ddcf1015\": container with ID starting with ba50db571f3fc5e15a3e789346ca090615c6d42a8112e46e57f58b40ddcf1015 not found: ID does not exist" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.732486 5000 scope.go:117] "RemoveContainer" containerID="30465b20aab135efdb42e9e7616e9b7d3664a3fea4c35ed2af970598755f2a85" Jan 05 21:37:42 crc kubenswrapper[5000]: E0105 21:37:42.732933 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30465b20aab135efdb42e9e7616e9b7d3664a3fea4c35ed2af970598755f2a85\": container with ID starting with 30465b20aab135efdb42e9e7616e9b7d3664a3fea4c35ed2af970598755f2a85 not found: ID does not exist" containerID="30465b20aab135efdb42e9e7616e9b7d3664a3fea4c35ed2af970598755f2a85" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.732958 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30465b20aab135efdb42e9e7616e9b7d3664a3fea4c35ed2af970598755f2a85"} err="failed to get container status \"30465b20aab135efdb42e9e7616e9b7d3664a3fea4c35ed2af970598755f2a85\": rpc error: code = NotFound desc = could not find container \"30465b20aab135efdb42e9e7616e9b7d3664a3fea4c35ed2af970598755f2a85\": container with ID 
starting with 30465b20aab135efdb42e9e7616e9b7d3664a3fea4c35ed2af970598755f2a85 not found: ID does not exist" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.732977 5000 scope.go:117] "RemoveContainer" containerID="52237f78a2f17ee61c700b48d1dec26a32791b10a55002ad0e473c8893826771" Jan 05 21:37:42 crc kubenswrapper[5000]: E0105 21:37:42.733386 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52237f78a2f17ee61c700b48d1dec26a32791b10a55002ad0e473c8893826771\": container with ID starting with 52237f78a2f17ee61c700b48d1dec26a32791b10a55002ad0e473c8893826771 not found: ID does not exist" containerID="52237f78a2f17ee61c700b48d1dec26a32791b10a55002ad0e473c8893826771" Jan 05 21:37:42 crc kubenswrapper[5000]: I0105 21:37:42.733413 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52237f78a2f17ee61c700b48d1dec26a32791b10a55002ad0e473c8893826771"} err="failed to get container status \"52237f78a2f17ee61c700b48d1dec26a32791b10a55002ad0e473c8893826771\": rpc error: code = NotFound desc = could not find container \"52237f78a2f17ee61c700b48d1dec26a32791b10a55002ad0e473c8893826771\": container with ID starting with 52237f78a2f17ee61c700b48d1dec26a32791b10a55002ad0e473c8893826771 not found: ID does not exist" Jan 05 21:37:43 crc kubenswrapper[5000]: I0105 21:37:43.332124 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" path="/var/lib/kubelet/pods/2094ccfc-f32d-4d29-82d5-b0abeb9586eb/volumes" Jan 05 21:37:53 crc kubenswrapper[5000]: I0105 21:37:53.099047 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:37:53 crc kubenswrapper[5000]: I0105 
21:37:53.099769 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:37:53 crc kubenswrapper[5000]: I0105 21:37:53.099828 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:37:53 crc kubenswrapper[5000]: I0105 21:37:53.100634 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e3606c29310e148be970c090222"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:37:53 crc kubenswrapper[5000]: I0105 21:37:53.100740 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e3606c29310e148be970c090222" gracePeriod=600 Jan 05 21:37:53 crc kubenswrapper[5000]: I0105 21:37:53.752317 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e3606c29310e148be970c090222" exitCode=0 Jan 05 21:37:53 crc kubenswrapper[5000]: I0105 21:37:53.752724 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e3606c29310e148be970c090222"} Jan 05 21:37:53 crc 
kubenswrapper[5000]: I0105 21:37:53.752750 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"ce647f76a2224015ddb59c7a18d4416444bb91c80f4ee4c3462325a8a5e9a2df"} Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.389040 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-srwl2"] Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.389803 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-srwl2" podUID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" containerName="registry-server" containerID="cri-o://9b15de792bbc6eefd87d8aa91f69b1002f2c3c602860e8ba8bcf6eef2c889bb7" gracePeriod=30 Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.405703 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-blwk8"] Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.405942 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-blwk8" podUID="5361e42c-4e4e-43ff-b7dc-e02436e5d46c" containerName="registry-server" containerID="cri-o://b95064b0b81bf0cb8a04baedd0333d31787eb98805bb31706ff60bb1a8a1696c" gracePeriod=30 Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.410570 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpmdv"] Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.410912 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" podUID="7f1846c9-70fd-44b0-8ea0-f0d67a308185" containerName="marketplace-operator" containerID="cri-o://2ccbdb54d59ba15bd05e8e24636a470255c914a0a902e132f1cb888f6ed9ffb6" gracePeriod=30 Jan 05 21:38:02 crc 
kubenswrapper[5000]: I0105 21:38:02.415952 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wx9jq"] Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.416313 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wx9jq" podUID="74fefb64-8607-40f3-aeb6-b4578ed8d91c" containerName="registry-server" containerID="cri-o://5d0f48a962674a3b94daf948074c047971cd451728b9baf45bf299f11af48cec" gracePeriod=30 Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.420300 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bsh5l"] Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.420596 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bsh5l" podUID="6ddd1046-c918-4c58-921d-5108500a388f" containerName="registry-server" containerID="cri-o://1b5d28f1e19a9b58a78523ac55e79b2ef6abd20cafd57b9216f21f7b2f533332" gracePeriod=30 Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.436517 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d8trn"] Jan 05 21:38:02 crc kubenswrapper[5000]: E0105 21:38:02.436743 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" containerName="registry-server" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.436755 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" containerName="registry-server" Jan 05 21:38:02 crc kubenswrapper[5000]: E0105 21:38:02.436776 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" containerName="extract-content" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.436784 5000 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" containerName="extract-content" Jan 05 21:38:02 crc kubenswrapper[5000]: E0105 21:38:02.436801 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" containerName="extract-utilities" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.436810 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" containerName="extract-utilities" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.436958 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="2094ccfc-f32d-4d29-82d5-b0abeb9586eb" containerName="registry-server" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.437625 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.439802 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d8trn"] Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.604832 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/28f7248c-0908-4c50-8c47-14d96f5c8665-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d8trn\" (UID: \"28f7248c-0908-4c50-8c47-14d96f5c8665\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.604925 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28f7248c-0908-4c50-8c47-14d96f5c8665-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d8trn\" (UID: \"28f7248c-0908-4c50-8c47-14d96f5c8665\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" Jan 05 21:38:02 
crc kubenswrapper[5000]: I0105 21:38:02.604952 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npmgd\" (UniqueName: \"kubernetes.io/projected/28f7248c-0908-4c50-8c47-14d96f5c8665-kube-api-access-npmgd\") pod \"marketplace-operator-79b997595-d8trn\" (UID: \"28f7248c-0908-4c50-8c47-14d96f5c8665\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.706058 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28f7248c-0908-4c50-8c47-14d96f5c8665-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d8trn\" (UID: \"28f7248c-0908-4c50-8c47-14d96f5c8665\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.706111 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npmgd\" (UniqueName: \"kubernetes.io/projected/28f7248c-0908-4c50-8c47-14d96f5c8665-kube-api-access-npmgd\") pod \"marketplace-operator-79b997595-d8trn\" (UID: \"28f7248c-0908-4c50-8c47-14d96f5c8665\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.706169 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/28f7248c-0908-4c50-8c47-14d96f5c8665-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d8trn\" (UID: \"28f7248c-0908-4c50-8c47-14d96f5c8665\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.707730 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/28f7248c-0908-4c50-8c47-14d96f5c8665-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d8trn\" (UID: \"28f7248c-0908-4c50-8c47-14d96f5c8665\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.720488 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/28f7248c-0908-4c50-8c47-14d96f5c8665-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d8trn\" (UID: \"28f7248c-0908-4c50-8c47-14d96f5c8665\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.723920 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npmgd\" (UniqueName: \"kubernetes.io/projected/28f7248c-0908-4c50-8c47-14d96f5c8665-kube-api-access-npmgd\") pod \"marketplace-operator-79b997595-d8trn\" (UID: \"28f7248c-0908-4c50-8c47-14d96f5c8665\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.765532 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.809026 5000 generic.go:334] "Generic (PLEG): container finished" podID="74fefb64-8607-40f3-aeb6-b4578ed8d91c" containerID="5d0f48a962674a3b94daf948074c047971cd451728b9baf45bf299f11af48cec" exitCode=0 Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.809185 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wx9jq" event={"ID":"74fefb64-8607-40f3-aeb6-b4578ed8d91c","Type":"ContainerDied","Data":"5d0f48a962674a3b94daf948074c047971cd451728b9baf45bf299f11af48cec"} Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.811499 5000 generic.go:334] "Generic (PLEG): container finished" podID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" containerID="9b15de792bbc6eefd87d8aa91f69b1002f2c3c602860e8ba8bcf6eef2c889bb7" exitCode=0 Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.811554 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srwl2" event={"ID":"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3","Type":"ContainerDied","Data":"9b15de792bbc6eefd87d8aa91f69b1002f2c3c602860e8ba8bcf6eef2c889bb7"} Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.811577 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-srwl2" event={"ID":"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3","Type":"ContainerDied","Data":"62c2a2e8446915c54b21fc9392263d614cec1507938ae28a4319145e8a18d522"} Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.811594 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62c2a2e8446915c54b21fc9392263d614cec1507938ae28a4319145e8a18d522" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.814035 5000 generic.go:334] "Generic (PLEG): container finished" podID="6ddd1046-c918-4c58-921d-5108500a388f" 
containerID="1b5d28f1e19a9b58a78523ac55e79b2ef6abd20cafd57b9216f21f7b2f533332" exitCode=0 Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.814117 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsh5l" event={"ID":"6ddd1046-c918-4c58-921d-5108500a388f","Type":"ContainerDied","Data":"1b5d28f1e19a9b58a78523ac55e79b2ef6abd20cafd57b9216f21f7b2f533332"} Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.816562 5000 generic.go:334] "Generic (PLEG): container finished" podID="7f1846c9-70fd-44b0-8ea0-f0d67a308185" containerID="2ccbdb54d59ba15bd05e8e24636a470255c914a0a902e132f1cb888f6ed9ffb6" exitCode=0 Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.816619 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" event={"ID":"7f1846c9-70fd-44b0-8ea0-f0d67a308185","Type":"ContainerDied","Data":"2ccbdb54d59ba15bd05e8e24636a470255c914a0a902e132f1cb888f6ed9ffb6"} Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.820558 5000 generic.go:334] "Generic (PLEG): container finished" podID="5361e42c-4e4e-43ff-b7dc-e02436e5d46c" containerID="b95064b0b81bf0cb8a04baedd0333d31787eb98805bb31706ff60bb1a8a1696c" exitCode=0 Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.820599 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blwk8" event={"ID":"5361e42c-4e4e-43ff-b7dc-e02436e5d46c","Type":"ContainerDied","Data":"b95064b0b81bf0cb8a04baedd0333d31787eb98805bb31706ff60bb1a8a1696c"} Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.864926 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.873458 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.891745 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.892812 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:38:02 crc kubenswrapper[5000]: I0105 21:38:02.905735 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011531 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f1846c9-70fd-44b0-8ea0-f0d67a308185-marketplace-operator-metrics\") pod \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\" (UID: \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011603 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh8j4\" (UniqueName: \"kubernetes.io/projected/6ddd1046-c918-4c58-921d-5108500a388f-kube-api-access-wh8j4\") pod \"6ddd1046-c918-4c58-921d-5108500a388f\" (UID: \"6ddd1046-c918-4c58-921d-5108500a388f\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011641 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-catalog-content\") pod \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\" (UID: \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011670 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-utilities\") pod \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\" (UID: \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011702 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdvvm\" (UniqueName: \"kubernetes.io/projected/7f1846c9-70fd-44b0-8ea0-f0d67a308185-kube-api-access-mdvvm\") pod \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\" (UID: \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011732 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ddd1046-c918-4c58-921d-5108500a388f-utilities\") pod \"6ddd1046-c918-4c58-921d-5108500a388f\" (UID: \"6ddd1046-c918-4c58-921d-5108500a388f\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011754 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f1846c9-70fd-44b0-8ea0-f0d67a308185-marketplace-trusted-ca\") pod \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\" (UID: \"7f1846c9-70fd-44b0-8ea0-f0d67a308185\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011775 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hn2h\" (UniqueName: \"kubernetes.io/projected/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-kube-api-access-6hn2h\") pod \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\" (UID: \"5361e42c-4e4e-43ff-b7dc-e02436e5d46c\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011795 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ddd1046-c918-4c58-921d-5108500a388f-catalog-content\") pod \"6ddd1046-c918-4c58-921d-5108500a388f\" (UID: \"6ddd1046-c918-4c58-921d-5108500a388f\") " Jan 05 21:38:03 crc 
kubenswrapper[5000]: I0105 21:38:03.011819 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74fefb64-8607-40f3-aeb6-b4578ed8d91c-utilities\") pod \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\" (UID: \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011840 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7btlt\" (UniqueName: \"kubernetes.io/projected/74fefb64-8607-40f3-aeb6-b4578ed8d91c-kube-api-access-7btlt\") pod \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\" (UID: \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011865 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74fefb64-8607-40f3-aeb6-b4578ed8d91c-catalog-content\") pod \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\" (UID: \"74fefb64-8607-40f3-aeb6-b4578ed8d91c\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011906 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vkfq\" (UniqueName: \"kubernetes.io/projected/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-kube-api-access-9vkfq\") pod \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\" (UID: \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011931 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-catalog-content\") pod \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\" (UID: \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.011951 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-utilities\") pod \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\" (UID: \"8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3\") " Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.012903 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-utilities" (OuterVolumeSpecName: "utilities") pod "8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" (UID: "8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.014541 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-utilities" (OuterVolumeSpecName: "utilities") pod "5361e42c-4e4e-43ff-b7dc-e02436e5d46c" (UID: "5361e42c-4e4e-43ff-b7dc-e02436e5d46c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.014572 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ddd1046-c918-4c58-921d-5108500a388f-utilities" (OuterVolumeSpecName: "utilities") pod "6ddd1046-c918-4c58-921d-5108500a388f" (UID: "6ddd1046-c918-4c58-921d-5108500a388f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.015232 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f1846c9-70fd-44b0-8ea0-f0d67a308185-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7f1846c9-70fd-44b0-8ea0-f0d67a308185" (UID: "7f1846c9-70fd-44b0-8ea0-f0d67a308185"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.016250 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f1846c9-70fd-44b0-8ea0-f0d67a308185-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7f1846c9-70fd-44b0-8ea0-f0d67a308185" (UID: "7f1846c9-70fd-44b0-8ea0-f0d67a308185"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.016279 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74fefb64-8607-40f3-aeb6-b4578ed8d91c-utilities" (OuterVolumeSpecName: "utilities") pod "74fefb64-8607-40f3-aeb6-b4578ed8d91c" (UID: "74fefb64-8607-40f3-aeb6-b4578ed8d91c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.016593 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f1846c9-70fd-44b0-8ea0-f0d67a308185-kube-api-access-mdvvm" (OuterVolumeSpecName: "kube-api-access-mdvvm") pod "7f1846c9-70fd-44b0-8ea0-f0d67a308185" (UID: "7f1846c9-70fd-44b0-8ea0-f0d67a308185"). InnerVolumeSpecName "kube-api-access-mdvvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.017493 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-kube-api-access-9vkfq" (OuterVolumeSpecName: "kube-api-access-9vkfq") pod "8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" (UID: "8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3"). InnerVolumeSpecName "kube-api-access-9vkfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.017525 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-kube-api-access-6hn2h" (OuterVolumeSpecName: "kube-api-access-6hn2h") pod "5361e42c-4e4e-43ff-b7dc-e02436e5d46c" (UID: "5361e42c-4e4e-43ff-b7dc-e02436e5d46c"). InnerVolumeSpecName "kube-api-access-6hn2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.018655 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74fefb64-8607-40f3-aeb6-b4578ed8d91c-kube-api-access-7btlt" (OuterVolumeSpecName: "kube-api-access-7btlt") pod "74fefb64-8607-40f3-aeb6-b4578ed8d91c" (UID: "74fefb64-8607-40f3-aeb6-b4578ed8d91c"). InnerVolumeSpecName "kube-api-access-7btlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.021386 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ddd1046-c918-4c58-921d-5108500a388f-kube-api-access-wh8j4" (OuterVolumeSpecName: "kube-api-access-wh8j4") pod "6ddd1046-c918-4c58-921d-5108500a388f" (UID: "6ddd1046-c918-4c58-921d-5108500a388f"). InnerVolumeSpecName "kube-api-access-wh8j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.039797 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74fefb64-8607-40f3-aeb6-b4578ed8d91c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74fefb64-8607-40f3-aeb6-b4578ed8d91c" (UID: "74fefb64-8607-40f3-aeb6-b4578ed8d91c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.070605 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" (UID: "8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.071447 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5361e42c-4e4e-43ff-b7dc-e02436e5d46c" (UID: "5361e42c-4e4e-43ff-b7dc-e02436e5d46c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112838 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112874 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112900 5000 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f1846c9-70fd-44b0-8ea0-f0d67a308185-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112912 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh8j4\" (UniqueName: 
\"kubernetes.io/projected/6ddd1046-c918-4c58-921d-5108500a388f-kube-api-access-wh8j4\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112922 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112930 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112938 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdvvm\" (UniqueName: \"kubernetes.io/projected/7f1846c9-70fd-44b0-8ea0-f0d67a308185-kube-api-access-mdvvm\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112946 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ddd1046-c918-4c58-921d-5108500a388f-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112954 5000 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f1846c9-70fd-44b0-8ea0-f0d67a308185-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112962 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hn2h\" (UniqueName: \"kubernetes.io/projected/5361e42c-4e4e-43ff-b7dc-e02436e5d46c-kube-api-access-6hn2h\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112971 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7btlt\" (UniqueName: 
\"kubernetes.io/projected/74fefb64-8607-40f3-aeb6-b4578ed8d91c-kube-api-access-7btlt\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112979 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74fefb64-8607-40f3-aeb6-b4578ed8d91c-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112987 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74fefb64-8607-40f3-aeb6-b4578ed8d91c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.112998 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vkfq\" (UniqueName: \"kubernetes.io/projected/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3-kube-api-access-9vkfq\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.140361 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ddd1046-c918-4c58-921d-5108500a388f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ddd1046-c918-4c58-921d-5108500a388f" (UID: "6ddd1046-c918-4c58-921d-5108500a388f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.214488 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ddd1046-c918-4c58-921d-5108500a388f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.215264 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d8trn"] Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.828342 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" event={"ID":"7f1846c9-70fd-44b0-8ea0-f0d67a308185","Type":"ContainerDied","Data":"6b00f8ad9ca912c2b54bd76506f8675b456e4d0ffbffa730c59395705cd5fe89"} Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.828405 5000 scope.go:117] "RemoveContainer" containerID="2ccbdb54d59ba15bd05e8e24636a470255c914a0a902e132f1cb888f6ed9ffb6" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.828431 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fpmdv" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.832080 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-blwk8" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.832148 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blwk8" event={"ID":"5361e42c-4e4e-43ff-b7dc-e02436e5d46c","Type":"ContainerDied","Data":"2e8d133819163864c121641a9f802e9356fdf6bf38c2ee14ae7ec3c188dc914b"} Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.835364 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wx9jq" event={"ID":"74fefb64-8607-40f3-aeb6-b4578ed8d91c","Type":"ContainerDied","Data":"6641a8e64346a964f9ccb9b5fccd123d991ff60977865d90a15ef82809e3ce5e"} Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.835498 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wx9jq" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.837172 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" event={"ID":"28f7248c-0908-4c50-8c47-14d96f5c8665","Type":"ContainerStarted","Data":"3fc9371c82609bcc3dab29c0851dce5c3514b1158231621bebf9f682a21f0e2d"} Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.837445 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" event={"ID":"28f7248c-0908-4c50-8c47-14d96f5c8665","Type":"ContainerStarted","Data":"0f48ca89a6128d9b4c6a90236645604f655fa03809eb4ef1d9206ae648de9157"} Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.837918 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.840650 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-srwl2" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.841384 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bsh5l" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.841735 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bsh5l" event={"ID":"6ddd1046-c918-4c58-921d-5108500a388f","Type":"ContainerDied","Data":"c929dd1a8b9511fc0af152c19b3040b478693fc14f03b177eef658c7b04ecaad"} Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.845139 5000 scope.go:117] "RemoveContainer" containerID="b95064b0b81bf0cb8a04baedd0333d31787eb98805bb31706ff60bb1a8a1696c" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.846589 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.849963 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpmdv"] Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.862651 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpmdv"] Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.866588 5000 scope.go:117] "RemoveContainer" containerID="8c408d21b704f12ddb31516d5c7344450a8fc50776a1bc8d2be27aec204223ca" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.869292 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bsh5l"] Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.873439 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bsh5l"] Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.885308 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-wx9jq"] Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.889680 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wx9jq"] Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.894667 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-srwl2"] Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.899862 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-srwl2"] Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.933779 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-d8trn" podStartSLOduration=1.933764533 podStartE2EDuration="1.933764533s" podCreationTimestamp="2026-01-05 21:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:38:03.933343401 +0000 UTC m=+238.889545870" watchObservedRunningTime="2026-01-05 21:38:03.933764533 +0000 UTC m=+238.889967002" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.935995 5000 scope.go:117] "RemoveContainer" containerID="49296fd31720d85bd0ca0a1e7a6106b94bb7cc287328bc26356ceef7fb03356b" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.966881 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-blwk8"] Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.969114 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-blwk8"] Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.974076 5000 scope.go:117] "RemoveContainer" containerID="5d0f48a962674a3b94daf948074c047971cd451728b9baf45bf299f11af48cec" Jan 05 21:38:03 crc kubenswrapper[5000]: I0105 21:38:03.989120 5000 scope.go:117] "RemoveContainer" 
containerID="0583cb254e7c012493f4fa89bd24e693ab3321682a6f47734ef496d2bbd86747" Jan 05 21:38:04 crc kubenswrapper[5000]: I0105 21:38:04.005196 5000 scope.go:117] "RemoveContainer" containerID="2f4271bc2238447d45778c381a6765678a1f5f1412cc6d223974003faca85d9b" Jan 05 21:38:04 crc kubenswrapper[5000]: I0105 21:38:04.018900 5000 scope.go:117] "RemoveContainer" containerID="1b5d28f1e19a9b58a78523ac55e79b2ef6abd20cafd57b9216f21f7b2f533332" Jan 05 21:38:04 crc kubenswrapper[5000]: I0105 21:38:04.037762 5000 scope.go:117] "RemoveContainer" containerID="e1929e22958f702cd682744a72a286b0c83e8dae14e2a63e4f67387f239f26ce" Jan 05 21:38:04 crc kubenswrapper[5000]: I0105 21:38:04.057692 5000 scope.go:117] "RemoveContainer" containerID="012defa5570114cf9e342e75d36ce2ededfb89fe7557afc54023cc55edb7c1a0" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.196107 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tnrhc"] Jan 05 21:38:05 crc kubenswrapper[5000]: E0105 21:38:05.196655 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74fefb64-8607-40f3-aeb6-b4578ed8d91c" containerName="extract-utilities" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.196752 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="74fefb64-8607-40f3-aeb6-b4578ed8d91c" containerName="extract-utilities" Jan 05 21:38:05 crc kubenswrapper[5000]: E0105 21:38:05.196770 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" containerName="extract-content" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.196779 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" containerName="extract-content" Jan 05 21:38:05 crc kubenswrapper[5000]: E0105 21:38:05.196791 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" containerName="extract-utilities" Jan 05 21:38:05 crc 
kubenswrapper[5000]: I0105 21:38:05.196799 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" containerName="extract-utilities" Jan 05 21:38:05 crc kubenswrapper[5000]: E0105 21:38:05.196808 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5361e42c-4e4e-43ff-b7dc-e02436e5d46c" containerName="registry-server" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.196815 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="5361e42c-4e4e-43ff-b7dc-e02436e5d46c" containerName="registry-server" Jan 05 21:38:05 crc kubenswrapper[5000]: E0105 21:38:05.196827 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5361e42c-4e4e-43ff-b7dc-e02436e5d46c" containerName="extract-utilities" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.196834 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="5361e42c-4e4e-43ff-b7dc-e02436e5d46c" containerName="extract-utilities" Jan 05 21:38:05 crc kubenswrapper[5000]: E0105 21:38:05.196843 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f1846c9-70fd-44b0-8ea0-f0d67a308185" containerName="marketplace-operator" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.196850 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1846c9-70fd-44b0-8ea0-f0d67a308185" containerName="marketplace-operator" Jan 05 21:38:05 crc kubenswrapper[5000]: E0105 21:38:05.196858 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ddd1046-c918-4c58-921d-5108500a388f" containerName="extract-utilities" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.196864 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ddd1046-c918-4c58-921d-5108500a388f" containerName="extract-utilities" Jan 05 21:38:05 crc kubenswrapper[5000]: E0105 21:38:05.196873 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ddd1046-c918-4c58-921d-5108500a388f" containerName="extract-content" Jan 05 
21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.196881 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ddd1046-c918-4c58-921d-5108500a388f" containerName="extract-content" Jan 05 21:38:05 crc kubenswrapper[5000]: E0105 21:38:05.196910 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74fefb64-8607-40f3-aeb6-b4578ed8d91c" containerName="registry-server" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.196918 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="74fefb64-8607-40f3-aeb6-b4578ed8d91c" containerName="registry-server" Jan 05 21:38:05 crc kubenswrapper[5000]: E0105 21:38:05.196927 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" containerName="registry-server" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.196934 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" containerName="registry-server" Jan 05 21:38:05 crc kubenswrapper[5000]: E0105 21:38:05.196945 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74fefb64-8607-40f3-aeb6-b4578ed8d91c" containerName="extract-content" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.196951 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="74fefb64-8607-40f3-aeb6-b4578ed8d91c" containerName="extract-content" Jan 05 21:38:05 crc kubenswrapper[5000]: E0105 21:38:05.196961 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5361e42c-4e4e-43ff-b7dc-e02436e5d46c" containerName="extract-content" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.196969 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="5361e42c-4e4e-43ff-b7dc-e02436e5d46c" containerName="extract-content" Jan 05 21:38:05 crc kubenswrapper[5000]: E0105 21:38:05.196978 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ddd1046-c918-4c58-921d-5108500a388f" containerName="registry-server" Jan 05 21:38:05 crc 
kubenswrapper[5000]: I0105 21:38:05.196985 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ddd1046-c918-4c58-921d-5108500a388f" containerName="registry-server" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.197171 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ddd1046-c918-4c58-921d-5108500a388f" containerName="registry-server" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.197210 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="5361e42c-4e4e-43ff-b7dc-e02436e5d46c" containerName="registry-server" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.197222 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="74fefb64-8607-40f3-aeb6-b4578ed8d91c" containerName="registry-server" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.197236 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f1846c9-70fd-44b0-8ea0-f0d67a308185" containerName="marketplace-operator" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.197243 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" containerName="registry-server" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.206730 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.209624 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.220745 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tnrhc"] Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.329471 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5361e42c-4e4e-43ff-b7dc-e02436e5d46c" path="/var/lib/kubelet/pods/5361e42c-4e4e-43ff-b7dc-e02436e5d46c/volumes" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.330201 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ddd1046-c918-4c58-921d-5108500a388f" path="/var/lib/kubelet/pods/6ddd1046-c918-4c58-921d-5108500a388f/volumes" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.330955 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74fefb64-8607-40f3-aeb6-b4578ed8d91c" path="/var/lib/kubelet/pods/74fefb64-8607-40f3-aeb6-b4578ed8d91c/volumes" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.332129 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f1846c9-70fd-44b0-8ea0-f0d67a308185" path="/var/lib/kubelet/pods/7f1846c9-70fd-44b0-8ea0-f0d67a308185/volumes" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.332947 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3" path="/var/lib/kubelet/pods/8496d295-62ec-4bf8-81bb-e9bcd7c2d0e3/volumes" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.339128 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05627cab-34e2-43e0-abd1-c730dfde0fb3-utilities\") pod \"redhat-operators-tnrhc\" (UID: 
\"05627cab-34e2-43e0-abd1-c730dfde0fb3\") " pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.339269 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkz27\" (UniqueName: \"kubernetes.io/projected/05627cab-34e2-43e0-abd1-c730dfde0fb3-kube-api-access-lkz27\") pod \"redhat-operators-tnrhc\" (UID: \"05627cab-34e2-43e0-abd1-c730dfde0fb3\") " pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.339352 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05627cab-34e2-43e0-abd1-c730dfde0fb3-catalog-content\") pod \"redhat-operators-tnrhc\" (UID: \"05627cab-34e2-43e0-abd1-c730dfde0fb3\") " pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.440604 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05627cab-34e2-43e0-abd1-c730dfde0fb3-utilities\") pod \"redhat-operators-tnrhc\" (UID: \"05627cab-34e2-43e0-abd1-c730dfde0fb3\") " pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.440701 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkz27\" (UniqueName: \"kubernetes.io/projected/05627cab-34e2-43e0-abd1-c730dfde0fb3-kube-api-access-lkz27\") pod \"redhat-operators-tnrhc\" (UID: \"05627cab-34e2-43e0-abd1-c730dfde0fb3\") " pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.440740 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05627cab-34e2-43e0-abd1-c730dfde0fb3-catalog-content\") pod \"redhat-operators-tnrhc\" 
(UID: \"05627cab-34e2-43e0-abd1-c730dfde0fb3\") " pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.441170 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05627cab-34e2-43e0-abd1-c730dfde0fb3-utilities\") pod \"redhat-operators-tnrhc\" (UID: \"05627cab-34e2-43e0-abd1-c730dfde0fb3\") " pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.441246 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05627cab-34e2-43e0-abd1-c730dfde0fb3-catalog-content\") pod \"redhat-operators-tnrhc\" (UID: \"05627cab-34e2-43e0-abd1-c730dfde0fb3\") " pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.461647 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkz27\" (UniqueName: \"kubernetes.io/projected/05627cab-34e2-43e0-abd1-c730dfde0fb3-kube-api-access-lkz27\") pod \"redhat-operators-tnrhc\" (UID: \"05627cab-34e2-43e0-abd1-c730dfde0fb3\") " pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.533608 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.541370 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.718731 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tnrhc"] Jan 05 21:38:05 crc kubenswrapper[5000]: W0105 21:38:05.731675 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05627cab_34e2_43e0_abd1_c730dfde0fb3.slice/crio-d1523cd1c19e7baf987a46725e2e5bd417f374d49bdd97e24ebc2535971a4ffe WatchSource:0}: Error finding container d1523cd1c19e7baf987a46725e2e5bd417f374d49bdd97e24ebc2535971a4ffe: Status 404 returned error can't find the container with id d1523cd1c19e7baf987a46725e2e5bd417f374d49bdd97e24ebc2535971a4ffe Jan 05 21:38:05 crc kubenswrapper[5000]: I0105 21:38:05.856387 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tnrhc" event={"ID":"05627cab-34e2-43e0-abd1-c730dfde0fb3","Type":"ContainerStarted","Data":"d1523cd1c19e7baf987a46725e2e5bd417f374d49bdd97e24ebc2535971a4ffe"} Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.196473 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-527mn"] Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.197402 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.199553 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.209133 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-527mn"] Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.321983 5000 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.323246 5000 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.325696 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.326409 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397" gracePeriod=15 Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.326658 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28" gracePeriod=15 Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.326848 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c" gracePeriod=15 Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.327243 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6" gracePeriod=15 Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.328112 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477" gracePeriod=15 Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.329061 5000 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 21:38:06 crc kubenswrapper[5000]: E0105 21:38:06.329350 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.329368 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 05 21:38:06 crc kubenswrapper[5000]: E0105 21:38:06.329382 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.329389 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 05 21:38:06 crc kubenswrapper[5000]: E0105 21:38:06.329402 5000 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.329410 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 05 21:38:06 crc kubenswrapper[5000]: E0105 21:38:06.329424 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.329432 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 21:38:06 crc kubenswrapper[5000]: E0105 21:38:06.329448 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.329456 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 21:38:06 crc kubenswrapper[5000]: E0105 21:38:06.329465 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.329473 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 05 21:38:06 crc kubenswrapper[5000]: E0105 21:38:06.329491 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.329499 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 05 21:38:06 crc 
kubenswrapper[5000]: I0105 21:38:06.329670 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.329685 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.329694 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.329706 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.329716 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.329723 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.352053 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.352319 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.352350 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.352379 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-utilities\") pod \"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.352397 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.352452 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.352467 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.352521 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-catalog-content\") pod \"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.352544 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnctx\" (UniqueName: \"kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx\") pod \"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.352559 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.352591 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: E0105 21:38:06.417757 5000 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 
38.102.83.110:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.453390 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-catalog-content\") pod \"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.453439 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnctx\" (UniqueName: \"kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx\") pod \"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.453457 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.453495 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.453813 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.453862 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.453910 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.453929 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.453947 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.453981 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-utilities\") pod \"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " 
pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.454005 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: E0105 21:38:06.454016 5000 projected.go:194] Error preparing data for projected volume kube-api-access-lnctx for pod openshift-marketplace/certified-operators-527mn: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:38:06 crc kubenswrapper[5000]: E0105 21:38:06.454083 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx podName:82b26bf1-ce94-4d00-b00d-fda0c33a2dfe nodeName:}" failed. No retries permitted until 2026-01-05 21:38:06.954064007 +0000 UTC m=+241.910266476 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lnctx" (UniqueName: "kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx") pod "certified-operators-527mn" (UID: "82b26bf1-ce94-4d00-b00d-fda0c33a2dfe") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.454095 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-catalog-content\") pod \"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.454103 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.454108 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.454131 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 
21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.454141 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.454188 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.454208 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.454224 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.454318 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: E0105 21:38:06.454363 5000 event.go:368] "Unable to write event 
(may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-527mn.1887f37a8da3487f openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-527mn,UID:82b26bf1-ce94-4d00-b00d-fda0c33a2dfe,APIVersion:v1,ResourceVersion:29656,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"kube-api-access-lnctx\" : failed to fetch token: Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token\": dial tcp 38.102.83.110:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-05 21:38:06.454057087 +0000 UTC m=+241.410259556,LastTimestamp:2026-01-05 21:38:06.454057087 +0000 UTC m=+241.410259556,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.454456 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-utilities\") pod \"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.718964 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:06 crc kubenswrapper[5000]: W0105 21:38:06.736235 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f52efc75f071577db5825f02737f1c70e1d89a66fd4ae1ef645fc1eb3dbcc2d6 WatchSource:0}: Error finding container f52efc75f071577db5825f02737f1c70e1d89a66fd4ae1ef645fc1eb3dbcc2d6: Status 404 returned error can't find the container with id f52efc75f071577db5825f02737f1c70e1d89a66fd4ae1ef645fc1eb3dbcc2d6 Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.870308 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f52efc75f071577db5825f02737f1c70e1d89a66fd4ae1ef645fc1eb3dbcc2d6"} Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.872816 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.873877 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.874650 5000 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28" exitCode=0 Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.874669 5000 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6" exitCode=0 Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 
21:38:06.874677 5000 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c" exitCode=0 Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.874684 5000 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477" exitCode=2 Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.874764 5000 scope.go:117] "RemoveContainer" containerID="5d28a96da8d49f12c3328d523f2a5027c6af4434217e1afcf036abf0dd9d7a9e" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.876370 5000 generic.go:334] "Generic (PLEG): container finished" podID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" containerID="d928076cc906ca2314abde6d7371ee9718ca5282714586f574220df15f99f232" exitCode=0 Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.876426 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2199f70b-6ba2-4e30-8e73-7eb7fc512d37","Type":"ContainerDied","Data":"d928076cc906ca2314abde6d7371ee9718ca5282714586f574220df15f99f232"} Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.877292 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.877584 5000 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:06 crc 
kubenswrapper[5000]: I0105 21:38:06.878843 5000 generic.go:334] "Generic (PLEG): container finished" podID="05627cab-34e2-43e0-abd1-c730dfde0fb3" containerID="8ae1572970036c42221127215b2af06dcce0cab47e2de8452bb403497a01ceb3" exitCode=0 Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.878862 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tnrhc" event={"ID":"05627cab-34e2-43e0-abd1-c730dfde0fb3","Type":"ContainerDied","Data":"8ae1572970036c42221127215b2af06dcce0cab47e2de8452bb403497a01ceb3"} Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.879703 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.879865 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.880035 5000 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:06 crc kubenswrapper[5000]: I0105 21:38:06.960727 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnctx\" (UniqueName: \"kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx\") pod 
\"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:06 crc kubenswrapper[5000]: E0105 21:38:06.961481 5000 projected.go:194] Error preparing data for projected volume kube-api-access-lnctx for pod openshift-marketplace/certified-operators-527mn: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:38:06 crc kubenswrapper[5000]: E0105 21:38:06.961541 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx podName:82b26bf1-ce94-4d00-b00d-fda0c33a2dfe nodeName:}" failed. No retries permitted until 2026-01-05 21:38:07.961523573 +0000 UTC m=+242.917726052 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lnctx" (UniqueName: "kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx") pod "certified-operators-527mn" (UID: "82b26bf1-ce94-4d00-b00d-fda0c33a2dfe") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:38:07 crc kubenswrapper[5000]: I0105 21:38:07.108689 5000 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 05 21:38:07 crc kubenswrapper[5000]: I0105 21:38:07.108752 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get 
\"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 05 21:38:07 crc kubenswrapper[5000]: I0105 21:38:07.886445 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9b5a57fa83df905062cd2d9b85ead1b18f308e63f0956e6868f51804e71092d5"} Jan 05 21:38:07 crc kubenswrapper[5000]: I0105 21:38:07.887201 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:07 crc kubenswrapper[5000]: E0105 21:38:07.887478 5000 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:07 crc kubenswrapper[5000]: I0105 21:38:07.887616 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:07 crc kubenswrapper[5000]: I0105 21:38:07.889842 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 05 21:38:07 crc kubenswrapper[5000]: I0105 21:38:07.892351 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tnrhc" 
event={"ID":"05627cab-34e2-43e0-abd1-c730dfde0fb3","Type":"ContainerStarted","Data":"261f639f704e927bd2ee210b1150e40da0eb530ad67e6530cd49a5c052a58f51"} Jan 05 21:38:07 crc kubenswrapper[5000]: I0105 21:38:07.893136 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:07 crc kubenswrapper[5000]: I0105 21:38:07.894274 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:07 crc kubenswrapper[5000]: I0105 21:38:07.973479 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnctx\" (UniqueName: \"kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx\") pod \"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:07 crc kubenswrapper[5000]: E0105 21:38:07.974479 5000 projected.go:194] Error preparing data for projected volume kube-api-access-lnctx for pod openshift-marketplace/certified-operators-527mn: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:38:07 crc kubenswrapper[5000]: E0105 21:38:07.974609 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx 
podName:82b26bf1-ce94-4d00-b00d-fda0c33a2dfe nodeName:}" failed. No retries permitted until 2026-01-05 21:38:09.974568638 +0000 UTC m=+244.930771148 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lnctx" (UniqueName: "kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx") pod "certified-operators-527mn" (UID: "82b26bf1-ce94-4d00-b00d-fda0c33a2dfe") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.128737 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.130018 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.130614 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.276974 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-var-lock\") pod \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\" (UID: \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\") " Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.277103 
5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-kube-api-access\") pod \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\" (UID: \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\") " Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.277141 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-var-lock" (OuterVolumeSpecName: "var-lock") pod "2199f70b-6ba2-4e30-8e73-7eb7fc512d37" (UID: "2199f70b-6ba2-4e30-8e73-7eb7fc512d37"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.277192 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-kubelet-dir\") pod \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\" (UID: \"2199f70b-6ba2-4e30-8e73-7eb7fc512d37\") " Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.277271 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2199f70b-6ba2-4e30-8e73-7eb7fc512d37" (UID: "2199f70b-6ba2-4e30-8e73-7eb7fc512d37"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.277444 5000 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.277458 5000 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-var-lock\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.281476 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2199f70b-6ba2-4e30-8e73-7eb7fc512d37" (UID: "2199f70b-6ba2-4e30-8e73-7eb7fc512d37"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.378392 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2199f70b-6ba2-4e30-8e73-7eb7fc512d37-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.671175 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.672384 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.673146 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.673604 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.674017 5000 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.783222 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.783329 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.783472 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.783498 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.783510 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.783658 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.784108 5000 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.784135 5000 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.784148 5000 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.910025 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.910650 5000 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397" exitCode=0 Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.910719 5000 scope.go:117] "RemoveContainer" containerID="b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.910873 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.919585 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.919582 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2199f70b-6ba2-4e30-8e73-7eb7fc512d37","Type":"ContainerDied","Data":"de5b18ebde4927968f3b65870211c53eb4d983b6d33b7840d8327c27e10393e7"} Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.919757 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de5b18ebde4927968f3b65870211c53eb4d983b6d33b7840d8327c27e10393e7" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.921726 5000 generic.go:334] "Generic (PLEG): container finished" podID="05627cab-34e2-43e0-abd1-c730dfde0fb3" containerID="261f639f704e927bd2ee210b1150e40da0eb530ad67e6530cd49a5c052a58f51" exitCode=0 Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.921793 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tnrhc" event={"ID":"05627cab-34e2-43e0-abd1-c730dfde0fb3","Type":"ContainerDied","Data":"261f639f704e927bd2ee210b1150e40da0eb530ad67e6530cd49a5c052a58f51"} Jan 05 21:38:08 crc kubenswrapper[5000]: E0105 21:38:08.922443 5000 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.924758 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.925247 5000 status_manager.go:851] 
"Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.926304 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.928982 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.929135 5000 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.929282 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.945514 5000 scope.go:117] "RemoveContainer" 
containerID="4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.946391 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.947250 5000 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.948437 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.958328 5000 scope.go:117] "RemoveContainer" containerID="38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.977446 5000 scope.go:117] "RemoveContainer" containerID="60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477" Jan 05 21:38:08 crc kubenswrapper[5000]: I0105 21:38:08.994583 5000 scope.go:117] "RemoveContainer" containerID="d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.010352 5000 scope.go:117] "RemoveContainer" containerID="e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65" Jan 05 21:38:09 crc kubenswrapper[5000]: 
I0105 21:38:09.026492 5000 scope.go:117] "RemoveContainer" containerID="b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28" Jan 05 21:38:09 crc kubenswrapper[5000]: E0105 21:38:09.026962 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\": container with ID starting with b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28 not found: ID does not exist" containerID="b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.027041 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28"} err="failed to get container status \"b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\": rpc error: code = NotFound desc = could not find container \"b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28\": container with ID starting with b527f551c544b48d987d09b9dcd217d39860ad667aa3e0f676c59325d955ee28 not found: ID does not exist" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.027083 5000 scope.go:117] "RemoveContainer" containerID="4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6" Jan 05 21:38:09 crc kubenswrapper[5000]: E0105 21:38:09.027432 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\": container with ID starting with 4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6 not found: ID does not exist" containerID="4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.027469 5000 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6"} err="failed to get container status \"4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\": rpc error: code = NotFound desc = could not find container \"4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6\": container with ID starting with 4c35148d5c3a75103bad8b9f5fb0ec4673b6eb6e4be20862ae539326970f39a6 not found: ID does not exist" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.027499 5000 scope.go:117] "RemoveContainer" containerID="38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c" Jan 05 21:38:09 crc kubenswrapper[5000]: E0105 21:38:09.027770 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\": container with ID starting with 38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c not found: ID does not exist" containerID="38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.027798 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c"} err="failed to get container status \"38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\": rpc error: code = NotFound desc = could not find container \"38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c\": container with ID starting with 38bf58b77e48d6faa2822fda13f66264634a03437f18acedd747ac0df45e0b2c not found: ID does not exist" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.027815 5000 scope.go:117] "RemoveContainer" containerID="60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477" Jan 05 21:38:09 crc kubenswrapper[5000]: E0105 21:38:09.028198 5000 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\": container with ID starting with 60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477 not found: ID does not exist" containerID="60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.028232 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477"} err="failed to get container status \"60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\": rpc error: code = NotFound desc = could not find container \"60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477\": container with ID starting with 60b1303cb223b307c6aecee2d3d3e3b6323ce9069f30b7ace851b88a155b3477 not found: ID does not exist" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.028254 5000 scope.go:117] "RemoveContainer" containerID="d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397" Jan 05 21:38:09 crc kubenswrapper[5000]: E0105 21:38:09.028458 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\": container with ID starting with d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397 not found: ID does not exist" containerID="d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.028484 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397"} err="failed to get container status \"d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\": rpc error: code = NotFound desc = could not find container 
\"d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397\": container with ID starting with d83d8f9c6efb075a10c8e2e5356820dacd9f6c30b0f430a5521d712e44085397 not found: ID does not exist" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.028497 5000 scope.go:117] "RemoveContainer" containerID="e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65" Jan 05 21:38:09 crc kubenswrapper[5000]: E0105 21:38:09.028702 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\": container with ID starting with e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65 not found: ID does not exist" containerID="e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.028731 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65"} err="failed to get container status \"e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\": rpc error: code = NotFound desc = could not find container \"e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65\": container with ID starting with e44df1460beab95ac6dcc36830c3dcbf01bac660917e6df8c76efced4894cc65 not found: ID does not exist" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.333754 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.930659 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tnrhc" event={"ID":"05627cab-34e2-43e0-abd1-c730dfde0fb3","Type":"ContainerStarted","Data":"7a9bf43ba9fd0ed6c18e4c8e0fbf539bcde1e90c94362593e9386aba283c1e54"} 
Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.932849 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.933184 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:09 crc kubenswrapper[5000]: I0105 21:38:09.996084 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnctx\" (UniqueName: \"kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx\") pod \"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:09 crc kubenswrapper[5000]: E0105 21:38:09.996782 5000 projected.go:194] Error preparing data for projected volume kube-api-access-lnctx for pod openshift-marketplace/certified-operators-527mn: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:38:09 crc kubenswrapper[5000]: E0105 21:38:09.996860 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx podName:82b26bf1-ce94-4d00-b00d-fda0c33a2dfe nodeName:}" failed. No retries permitted until 2026-01-05 21:38:13.996840116 +0000 UTC m=+248.953042585 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lnctx" (UniqueName: "kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx") pod "certified-operators-527mn" (UID: "82b26bf1-ce94-4d00-b00d-fda0c33a2dfe") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:38:10 crc kubenswrapper[5000]: E0105 21:38:10.342248 5000 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:10 crc kubenswrapper[5000]: E0105 21:38:10.342951 5000 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:10 crc kubenswrapper[5000]: E0105 21:38:10.343183 5000 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:10 crc kubenswrapper[5000]: E0105 21:38:10.343643 5000 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:10 crc kubenswrapper[5000]: E0105 21:38:10.344088 5000 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:10 crc kubenswrapper[5000]: I0105 21:38:10.344113 5000 
controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 05 21:38:10 crc kubenswrapper[5000]: E0105 21:38:10.344275 5000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="200ms" Jan 05 21:38:10 crc kubenswrapper[5000]: E0105 21:38:10.545717 5000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="400ms" Jan 05 21:38:10 crc kubenswrapper[5000]: E0105 21:38:10.946220 5000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="800ms" Jan 05 21:38:11 crc kubenswrapper[5000]: E0105 21:38:11.747773 5000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="1.6s" Jan 05 21:38:13 crc kubenswrapper[5000]: E0105 21:38:13.199815 5000 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-527mn.1887f37a8da3487f openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-527mn,UID:82b26bf1-ce94-4d00-b00d-fda0c33a2dfe,APIVersion:v1,ResourceVersion:29656,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"kube-api-access-lnctx\" : failed to fetch token: Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token\": dial tcp 38.102.83.110:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-05 21:38:06.454057087 +0000 UTC m=+241.410259556,LastTimestamp:2026-01-05 21:38:06.454057087 +0000 UTC m=+241.410259556,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 05 21:38:13 crc kubenswrapper[5000]: E0105 21:38:13.350361 5000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="3.2s" Jan 05 21:38:14 crc kubenswrapper[5000]: I0105 21:38:14.069637 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnctx\" (UniqueName: \"kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx\") pod \"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:14 crc kubenswrapper[5000]: E0105 21:38:14.070438 5000 projected.go:194] Error preparing data for projected volume kube-api-access-lnctx for pod openshift-marketplace/certified-operators-527mn: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:38:14 crc 
kubenswrapper[5000]: E0105 21:38:14.070579 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx podName:82b26bf1-ce94-4d00-b00d-fda0c33a2dfe nodeName:}" failed. No retries permitted until 2026-01-05 21:38:22.070558694 +0000 UTC m=+257.026761153 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lnctx" (UniqueName: "kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx") pod "certified-operators-527mn" (UID: "82b26bf1-ce94-4d00-b00d-fda0c33a2dfe") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 38.102.83.110:6443: connect: connection refused Jan 05 21:38:15 crc kubenswrapper[5000]: I0105 21:38:15.325534 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:15 crc kubenswrapper[5000]: I0105 21:38:15.326312 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:15 crc kubenswrapper[5000]: I0105 21:38:15.543766 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:15 crc kubenswrapper[5000]: I0105 21:38:15.543817 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:15 crc 
kubenswrapper[5000]: I0105 21:38:15.593537 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:15 crc kubenswrapper[5000]: I0105 21:38:15.594055 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:15 crc kubenswrapper[5000]: I0105 21:38:15.594322 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:15 crc kubenswrapper[5000]: I0105 21:38:15.991766 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tnrhc" Jan 05 21:38:15 crc kubenswrapper[5000]: I0105 21:38:15.992281 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:15 crc kubenswrapper[5000]: I0105 21:38:15.992665 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:16 crc kubenswrapper[5000]: E0105 21:38:16.552091 5000 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="6.4s" Jan 05 21:38:19 crc kubenswrapper[5000]: I0105 21:38:19.322876 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:19 crc kubenswrapper[5000]: I0105 21:38:19.324318 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:19 crc kubenswrapper[5000]: I0105 21:38:19.324594 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:19 crc kubenswrapper[5000]: I0105 21:38:19.337842 5000 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bde19f36-8816-4b31-a711-82b9d90f0855" Jan 05 21:38:19 crc kubenswrapper[5000]: I0105 21:38:19.337878 5000 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bde19f36-8816-4b31-a711-82b9d90f0855" Jan 05 21:38:19 crc kubenswrapper[5000]: E0105 21:38:19.338351 5000 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:19 crc 
kubenswrapper[5000]: I0105 21:38:19.338709 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:19 crc kubenswrapper[5000]: I0105 21:38:19.987414 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"170e05097e26aebfa8dcd5a302ed59e93d29863a952bf6d77676ecf4b9494c44"} Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.961971 5000 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.962039 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.994298 5000 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="61b20a8d0d110c5307b790a07deea57e1f475cbf1e75f43ba500681ed3840c67" exitCode=0 Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.994367 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"61b20a8d0d110c5307b790a07deea57e1f475cbf1e75f43ba500681ed3840c67"} Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.994632 5000 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="bde19f36-8816-4b31-a711-82b9d90f0855" Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.994665 5000 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bde19f36-8816-4b31-a711-82b9d90f0855" Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.995172 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.995373 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:20 crc kubenswrapper[5000]: E0105 21:38:20.995836 5000 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.997341 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.997379 5000 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117" exitCode=1 Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.997397 5000 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117"} Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.997841 5000 scope.go:117] "RemoveContainer" containerID="778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117" Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.998484 5000 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.998863 5000 status_manager.go:851] "Failed to get status for pod" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:20 crc kubenswrapper[5000]: I0105 21:38:20.999087 5000 status_manager.go:851] "Failed to get status for pod" podUID="05627cab-34e2-43e0-abd1-c730dfde0fb3" pod="openshift-marketplace/redhat-operators-tnrhc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tnrhc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 05 21:38:22 crc kubenswrapper[5000]: I0105 21:38:22.012447 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 05 21:38:22 crc kubenswrapper[5000]: I0105 21:38:22.012805 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"45722b0553c68bad33a16e7a53051fd6ac21dd702ec4a15f33d62987b2099cd2"} Jan 05 21:38:22 crc kubenswrapper[5000]: I0105 21:38:22.023213 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"88fa5e64bb36fb916e7d800fd2579a18307dba6415f6592d2291c978339e004d"} Jan 05 21:38:22 crc kubenswrapper[5000]: I0105 21:38:22.023257 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8ea170f5dd8987f9efb4e77f6de64d64735bf9efc1082898c6d480c603b062ec"} Jan 05 21:38:22 crc kubenswrapper[5000]: I0105 21:38:22.023300 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"092be04207bc8b45ef0119b17101942e9a948372238e4783eef1a5683e7828cd"} Jan 05 21:38:22 crc kubenswrapper[5000]: I0105 21:38:22.023317 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fb65b7c5b69981d3f72eb6e85a4a1271ed59dc03a9c5e348c4fcd12939c44b54"} Jan 05 21:38:22 crc kubenswrapper[5000]: I0105 21:38:22.076283 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnctx\" (UniqueName: \"kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx\") pod \"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:23 crc kubenswrapper[5000]: I0105 21:38:23.034060 5000 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b36e584d00d34590cb5312445da62ed45fe0515a876724a43137b4150459bf26"} Jan 05 21:38:23 crc kubenswrapper[5000]: I0105 21:38:23.034623 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:23 crc kubenswrapper[5000]: I0105 21:38:23.034499 5000 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bde19f36-8816-4b31-a711-82b9d90f0855" Jan 05 21:38:23 crc kubenswrapper[5000]: I0105 21:38:23.034668 5000 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bde19f36-8816-4b31-a711-82b9d90f0855" Jan 05 21:38:23 crc kubenswrapper[5000]: I0105 21:38:23.380331 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:38:23 crc kubenswrapper[5000]: I0105 21:38:23.381770 5000 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 05 21:38:23 crc kubenswrapper[5000]: I0105 21:38:23.381908 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 05 21:38:24 crc kubenswrapper[5000]: I0105 21:38:24.339796 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:24 crc 
kubenswrapper[5000]: I0105 21:38:24.339848 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:24 crc kubenswrapper[5000]: I0105 21:38:24.346034 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:27 crc kubenswrapper[5000]: I0105 21:38:27.712534 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnctx\" (UniqueName: \"kubernetes.io/projected/82b26bf1-ce94-4d00-b00d-fda0c33a2dfe-kube-api-access-lnctx\") pod \"certified-operators-527mn\" (UID: \"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe\") " pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:27 crc kubenswrapper[5000]: I0105 21:38:27.820816 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:28 crc kubenswrapper[5000]: I0105 21:38:28.043441 5000 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:28 crc kubenswrapper[5000]: I0105 21:38:28.061227 5000 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bde19f36-8816-4b31-a711-82b9d90f0855" Jan 05 21:38:28 crc kubenswrapper[5000]: I0105 21:38:28.061577 5000 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bde19f36-8816-4b31-a711-82b9d90f0855" Jan 05 21:38:28 crc kubenswrapper[5000]: I0105 21:38:28.067907 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:28 crc kubenswrapper[5000]: I0105 21:38:28.070815 5000 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="84b1434a-406d-4210-a395-3100f7958215" Jan 05 21:38:29 crc kubenswrapper[5000]: I0105 21:38:29.074844 5000 generic.go:334] "Generic (PLEG): container finished" podID="82b26bf1-ce94-4d00-b00d-fda0c33a2dfe" containerID="a1e1888e7461150e9ebaf3f27ffb1851026f5d6ca8c298101e2b9fc0bd877ae7" exitCode=0 Jan 05 21:38:29 crc kubenswrapper[5000]: I0105 21:38:29.075185 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-527mn" event={"ID":"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe","Type":"ContainerDied","Data":"a1e1888e7461150e9ebaf3f27ffb1851026f5d6ca8c298101e2b9fc0bd877ae7"} Jan 05 21:38:29 crc kubenswrapper[5000]: I0105 21:38:29.075806 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-527mn" event={"ID":"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe","Type":"ContainerStarted","Data":"a496bd5ed3cbfcd5cb25abdf01d9c661f4828525e052a0da69d9c3ad951d6b28"} Jan 05 21:38:29 crc kubenswrapper[5000]: I0105 21:38:29.075865 5000 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bde19f36-8816-4b31-a711-82b9d90f0855" Jan 05 21:38:29 crc kubenswrapper[5000]: I0105 21:38:29.075880 5000 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bde19f36-8816-4b31-a711-82b9d90f0855" Jan 05 21:38:29 crc kubenswrapper[5000]: I0105 21:38:29.440182 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:38:30 crc kubenswrapper[5000]: I0105 21:38:30.084518 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-527mn" event={"ID":"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe","Type":"ContainerStarted","Data":"6e2950f122329a3928a2038fe90d3b28c4f6f840a981cb9e153f386a4eddf254"} Jan 05 21:38:31 crc kubenswrapper[5000]: I0105 21:38:31.091504 5000 generic.go:334] "Generic (PLEG): container 
finished" podID="82b26bf1-ce94-4d00-b00d-fda0c33a2dfe" containerID="6e2950f122329a3928a2038fe90d3b28c4f6f840a981cb9e153f386a4eddf254" exitCode=0 Jan 05 21:38:31 crc kubenswrapper[5000]: I0105 21:38:31.091549 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-527mn" event={"ID":"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe","Type":"ContainerDied","Data":"6e2950f122329a3928a2038fe90d3b28c4f6f840a981cb9e153f386a4eddf254"} Jan 05 21:38:32 crc kubenswrapper[5000]: I0105 21:38:32.106076 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-527mn" event={"ID":"82b26bf1-ce94-4d00-b00d-fda0c33a2dfe","Type":"ContainerStarted","Data":"bed18deab063c956fcdd243a878abb7379b6005b052ee9933b4c55f4be30c848"} Jan 05 21:38:33 crc kubenswrapper[5000]: I0105 21:38:33.380183 5000 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 05 21:38:33 crc kubenswrapper[5000]: I0105 21:38:33.380822 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 05 21:38:35 crc kubenswrapper[5000]: I0105 21:38:35.347648 5000 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="84b1434a-406d-4210-a395-3100f7958215" Jan 05 21:38:37 crc kubenswrapper[5000]: I0105 21:38:37.821162 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:37 crc kubenswrapper[5000]: I0105 21:38:37.823051 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:37 crc kubenswrapper[5000]: I0105 21:38:37.871097 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:37 crc kubenswrapper[5000]: I0105 21:38:37.926848 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 05 21:38:37 crc kubenswrapper[5000]: I0105 21:38:37.986116 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 05 21:38:38 crc kubenswrapper[5000]: I0105 21:38:38.182084 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-527mn" Jan 05 21:38:38 crc kubenswrapper[5000]: I0105 21:38:38.663467 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 05 21:38:38 crc kubenswrapper[5000]: I0105 21:38:38.811772 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 05 21:38:39 crc kubenswrapper[5000]: I0105 21:38:39.023622 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 05 21:38:39 crc kubenswrapper[5000]: I0105 21:38:39.418533 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 05 21:38:39 crc kubenswrapper[5000]: I0105 21:38:39.562916 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 05 
21:38:39 crc kubenswrapper[5000]: I0105 21:38:39.779634 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 05 21:38:39 crc kubenswrapper[5000]: I0105 21:38:39.797165 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 05 21:38:39 crc kubenswrapper[5000]: I0105 21:38:39.986635 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.092590 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.105565 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.237797 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.238481 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.244515 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.276227 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.281905 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.290444 
5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.300285 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.428578 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.491532 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.529781 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.597470 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.640156 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.761689 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.797371 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.832945 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 05 21:38:40 crc kubenswrapper[5000]: I0105 21:38:40.942173 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.001927 5000 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.028967 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.279165 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.300698 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.340756 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.412104 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.430184 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.503607 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.562175 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.575029 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.649740 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 05 
21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.656091 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.691723 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.767604 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.882975 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.931574 5000 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 05 21:38:41 crc kubenswrapper[5000]: I0105 21:38:41.991951 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.087947 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.197004 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.211313 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.211379 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.322195 5000 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.367610 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.376149 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.543361 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.781736 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.831021 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.847508 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.891750 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.951932 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.956788 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 05 21:38:42 crc kubenswrapper[5000]: I0105 21:38:42.993043 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.047512 5000 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.067010 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.123823 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.142300 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.288664 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.366391 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.379697 5000 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.379768 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.379833 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:38:43 crc 
kubenswrapper[5000]: I0105 21:38:43.380844 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"45722b0553c68bad33a16e7a53051fd6ac21dd702ec4a15f33d62987b2099cd2"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.381033 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://45722b0553c68bad33a16e7a53051fd6ac21dd702ec4a15f33d62987b2099cd2" gracePeriod=30 Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.386118 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.467218 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.532175 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.604813 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.615671 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.681870 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 05 21:38:43 crc 
kubenswrapper[5000]: I0105 21:38:43.886648 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.935732 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.979097 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 05 21:38:43 crc kubenswrapper[5000]: I0105 21:38:43.985181 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.004651 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.031575 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.039677 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.083767 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.166395 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.181297 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.181514 5000 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.201519 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.202575 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.260952 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.332704 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.334075 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.352225 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.508350 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.511402 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.650267 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.698678 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 
21:38:44.924308 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.963640 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 05 21:38:44 crc kubenswrapper[5000]: I0105 21:38:44.979981 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.059490 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.060877 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.104376 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.129299 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.149692 5000 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.167156 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.207354 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.423422 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 
21:38:45.464646 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.520679 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.566324 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.567798 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.611130 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.720613 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.757928 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 05 21:38:45 crc kubenswrapper[5000]: I0105 21:38:45.832714 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.034243 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.161523 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.161881 5000 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.163870 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.295916 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.304194 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.408781 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.414480 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.454626 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.480763 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.487227 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.560102 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.570506 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 05 21:38:46 crc 
kubenswrapper[5000]: I0105 21:38:46.607152 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.626405 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.654102 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.663753 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.733418 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.744486 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.754381 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.760460 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 05 21:38:46 crc kubenswrapper[5000]: I0105 21:38:46.924308 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.040343 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.045870 5000 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.049982 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.091027 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.138669 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.159234 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.218268 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.234800 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.281523 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.403852 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.451801 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.474356 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.496499 5000 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.509289 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.546018 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.603301 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.651581 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.711215 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.722997 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.800504 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.814859 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.890758 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 05 21:38:47 crc kubenswrapper[5000]: I0105 21:38:47.922288 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 05 21:38:47 crc 
kubenswrapper[5000]: I0105 21:38:47.952337 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.011648 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.013872 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.025658 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.123932 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.201925 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.239136 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.241183 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.340705 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.385397 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.398468 5000 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"kube-root-ca.crt" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.416024 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.439802 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.445789 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.560518 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.631509 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.697739 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.791428 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.805982 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.819142 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 05 21:38:48 crc kubenswrapper[5000]: I0105 21:38:48.898445 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 05 21:38:48 crc 
kubenswrapper[5000]: I0105 21:38:48.957021 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.033414 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.070054 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.153586 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.201610 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.225660 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.273851 5000 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.299307 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.461301 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.495086 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.574484 5000 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.644857 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.715136 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.855872 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.869710 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 05 21:38:49 crc kubenswrapper[5000]: I0105 21:38:49.951385 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.011769 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.023885 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.063688 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.192354 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.241297 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.277783 5000 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.333483 5000 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.340231 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tnrhc" podStartSLOduration=42.847601057 podStartE2EDuration="45.340207613s" podCreationTimestamp="2026-01-05 21:38:05 +0000 UTC" firstStartedPulling="2026-01-05 21:38:06.880714675 +0000 UTC m=+241.836917144" lastFinishedPulling="2026-01-05 21:38:09.373321231 +0000 UTC m=+244.329523700" observedRunningTime="2026-01-05 21:38:27.764644329 +0000 UTC m=+262.720846818" watchObservedRunningTime="2026-01-05 21:38:50.340207613 +0000 UTC m=+285.296410122" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.340593 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-527mn" podStartSLOduration=41.930122403 podStartE2EDuration="44.340585985s" podCreationTimestamp="2026-01-05 21:38:06 +0000 UTC" firstStartedPulling="2026-01-05 21:38:29.077846119 +0000 UTC m=+264.034048588" lastFinishedPulling="2026-01-05 21:38:31.488309701 +0000 UTC m=+266.444512170" observedRunningTime="2026-01-05 21:38:32.123327112 +0000 UTC m=+267.079529631" watchObservedRunningTime="2026-01-05 21:38:50.340585985 +0000 UTC m=+285.296788494" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.341594 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.341656 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.341693 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-527mn"] Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.347355 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.358145 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.360866 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.368724 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.370184 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.370164861 podStartE2EDuration="22.370164861s" podCreationTimestamp="2026-01-05 21:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:38:50.369990256 +0000 UTC m=+285.326192735" watchObservedRunningTime="2026-01-05 21:38:50.370164861 +0000 UTC m=+285.326367340" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.390044 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.424223 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.437057 5000 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.437071 
5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.437327 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9b5a57fa83df905062cd2d9b85ead1b18f308e63f0956e6868f51804e71092d5" gracePeriod=5 Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.473504 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.519976 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.569274 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.675620 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.767102 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.815477 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.836131 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.905297 5000 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"image-registry-certificates" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.935432 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.947148 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 05 21:38:50 crc kubenswrapper[5000]: I0105 21:38:50.969843 5000 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.052183 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.188577 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.192368 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.201007 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.265194 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.378778 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.461923 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.601288 5000 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.786702 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.789118 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.875556 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.876723 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.933053 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 05 21:38:51 crc kubenswrapper[5000]: I0105 21:38:51.968610 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 05 21:38:52 crc kubenswrapper[5000]: I0105 21:38:52.077868 5000 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 05 21:38:52 crc kubenswrapper[5000]: I0105 21:38:52.133117 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 05 21:38:52 crc kubenswrapper[5000]: I0105 21:38:52.141032 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 05 21:38:52 crc kubenswrapper[5000]: I0105 21:38:52.454536 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 05 21:38:52 crc kubenswrapper[5000]: I0105 21:38:52.715153 5000 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 05 21:38:52 crc kubenswrapper[5000]: I0105 21:38:52.805225 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 05 21:38:52 crc kubenswrapper[5000]: I0105 21:38:52.814862 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 05 21:38:52 crc kubenswrapper[5000]: I0105 21:38:52.949163 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 05 21:38:52 crc kubenswrapper[5000]: I0105 21:38:52.976664 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 05 21:38:53 crc kubenswrapper[5000]: I0105 21:38:53.059472 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 05 21:38:53 crc kubenswrapper[5000]: I0105 21:38:53.510704 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 05 21:38:53 crc kubenswrapper[5000]: I0105 21:38:53.682965 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 05 21:38:53 crc kubenswrapper[5000]: I0105 21:38:53.787516 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 05 21:38:53 crc kubenswrapper[5000]: I0105 21:38:53.834711 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 05 21:38:53 crc kubenswrapper[5000]: I0105 21:38:53.949624 5000 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 05 21:38:53 crc kubenswrapper[5000]: I0105 21:38:53.957162 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 05 21:38:54 crc kubenswrapper[5000]: I0105 21:38:54.087176 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 05 21:38:54 crc kubenswrapper[5000]: I0105 21:38:54.256689 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.021815 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.022247 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.180377 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.180431 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.180459 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.180491 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.180532 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.180560 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: 
"manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.180558 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.180616 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.180675 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.180931 5000 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.180950 5000 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.180965 5000 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.180977 5000 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.189064 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.251066 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.251114 5000 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9b5a57fa83df905062cd2d9b85ead1b18f308e63f0956e6868f51804e71092d5" exitCode=137 Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.251153 5000 scope.go:117] "RemoveContainer" containerID="9b5a57fa83df905062cd2d9b85ead1b18f308e63f0956e6868f51804e71092d5" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.251202 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.280002 5000 scope.go:117] "RemoveContainer" containerID="9b5a57fa83df905062cd2d9b85ead1b18f308e63f0956e6868f51804e71092d5" Jan 05 21:38:56 crc kubenswrapper[5000]: E0105 21:38:56.280471 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b5a57fa83df905062cd2d9b85ead1b18f308e63f0956e6868f51804e71092d5\": container with ID starting with 9b5a57fa83df905062cd2d9b85ead1b18f308e63f0956e6868f51804e71092d5 not found: ID does not exist" containerID="9b5a57fa83df905062cd2d9b85ead1b18f308e63f0956e6868f51804e71092d5" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.280530 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b5a57fa83df905062cd2d9b85ead1b18f308e63f0956e6868f51804e71092d5"} err="failed to get container status \"9b5a57fa83df905062cd2d9b85ead1b18f308e63f0956e6868f51804e71092d5\": rpc error: code = NotFound desc = could not find container 
\"9b5a57fa83df905062cd2d9b85ead1b18f308e63f0956e6868f51804e71092d5\": container with ID starting with 9b5a57fa83df905062cd2d9b85ead1b18f308e63f0956e6868f51804e71092d5 not found: ID does not exist" Jan 05 21:38:56 crc kubenswrapper[5000]: I0105 21:38:56.282373 5000 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 05 21:38:57 crc kubenswrapper[5000]: I0105 21:38:57.337546 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 05 21:39:14 crc kubenswrapper[5000]: I0105 21:39:14.356121 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 05 21:39:14 crc kubenswrapper[5000]: I0105 21:39:14.359105 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 05 21:39:14 crc kubenswrapper[5000]: I0105 21:39:14.359154 5000 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="45722b0553c68bad33a16e7a53051fd6ac21dd702ec4a15f33d62987b2099cd2" exitCode=137 Jan 05 21:39:14 crc kubenswrapper[5000]: I0105 21:39:14.359183 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"45722b0553c68bad33a16e7a53051fd6ac21dd702ec4a15f33d62987b2099cd2"} Jan 05 21:39:14 crc kubenswrapper[5000]: I0105 21:39:14.359208 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8ebcf2b569b840c729b07ffc76aafe93149f3b511bf71e3b7099d3487896e352"} Jan 05 21:39:14 crc kubenswrapper[5000]: I0105 21:39:14.359227 5000 scope.go:117] "RemoveContainer" containerID="778b61d4c61d9f68fead3c5a4f26218a6d81a3c77782916233ba1fdde2ed7117" Jan 05 21:39:15 crc kubenswrapper[5000]: I0105 21:39:15.366615 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 05 21:39:19 crc kubenswrapper[5000]: I0105 21:39:19.440296 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:39:23 crc kubenswrapper[5000]: I0105 21:39:23.379283 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:39:23 crc kubenswrapper[5000]: I0105 21:39:23.379552 5000 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 05 21:39:23 crc kubenswrapper[5000]: I0105 21:39:23.379739 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 05 21:39:33 crc kubenswrapper[5000]: I0105 21:39:33.387086 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:39:33 crc 
kubenswrapper[5000]: I0105 21:39:33.395313 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.221120 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d5n4f"] Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.221846 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" podUID="c661b9d0-ba17-41d2-94dd-f1c71fe529d0" containerName="controller-manager" containerID="cri-o://6e8b9a2523ea6996d59527c4d44c62e033ef03c43c74d80503b737fda6e10e34" gracePeriod=30 Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.234402 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825"] Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.234950 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" podUID="bbe6c3a1-1534-4095-9e25-1f4ce093938e" containerName="route-controller-manager" containerID="cri-o://bd405d30ee6bff63ae848c7ad9bdfd880f9a3294acc48fdc11ec2dfc8c8753ac" gracePeriod=30 Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.535671 5000 generic.go:334] "Generic (PLEG): container finished" podID="c661b9d0-ba17-41d2-94dd-f1c71fe529d0" containerID="6e8b9a2523ea6996d59527c4d44c62e033ef03c43c74d80503b737fda6e10e34" exitCode=0 Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.535760 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" event={"ID":"c661b9d0-ba17-41d2-94dd-f1c71fe529d0","Type":"ContainerDied","Data":"6e8b9a2523ea6996d59527c4d44c62e033ef03c43c74d80503b737fda6e10e34"} Jan 05 21:39:46 crc 
kubenswrapper[5000]: I0105 21:39:46.537939 5000 generic.go:334] "Generic (PLEG): container finished" podID="bbe6c3a1-1534-4095-9e25-1f4ce093938e" containerID="bd405d30ee6bff63ae848c7ad9bdfd880f9a3294acc48fdc11ec2dfc8c8753ac" exitCode=0 Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.537990 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" event={"ID":"bbe6c3a1-1534-4095-9e25-1f4ce093938e","Type":"ContainerDied","Data":"bd405d30ee6bff63ae848c7ad9bdfd880f9a3294acc48fdc11ec2dfc8c8753ac"} Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.625010 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.686330 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.748511 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-client-ca\") pod \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.748585 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-config\") pod \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.748618 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-proxy-ca-bundles\") pod 
\"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.748651 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b64r7\" (UniqueName: \"kubernetes.io/projected/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-kube-api-access-b64r7\") pod \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.748775 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-serving-cert\") pod \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\" (UID: \"c661b9d0-ba17-41d2-94dd-f1c71fe529d0\") " Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.750183 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c661b9d0-ba17-41d2-94dd-f1c71fe529d0" (UID: "c661b9d0-ba17-41d2-94dd-f1c71fe529d0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.750216 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-config" (OuterVolumeSpecName: "config") pod "c661b9d0-ba17-41d2-94dd-f1c71fe529d0" (UID: "c661b9d0-ba17-41d2-94dd-f1c71fe529d0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.750236 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-client-ca" (OuterVolumeSpecName: "client-ca") pod "c661b9d0-ba17-41d2-94dd-f1c71fe529d0" (UID: "c661b9d0-ba17-41d2-94dd-f1c71fe529d0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.753864 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-kube-api-access-b64r7" (OuterVolumeSpecName: "kube-api-access-b64r7") pod "c661b9d0-ba17-41d2-94dd-f1c71fe529d0" (UID: "c661b9d0-ba17-41d2-94dd-f1c71fe529d0"). InnerVolumeSpecName "kube-api-access-b64r7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.754003 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c661b9d0-ba17-41d2-94dd-f1c71fe529d0" (UID: "c661b9d0-ba17-41d2-94dd-f1c71fe529d0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.849305 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mstk5\" (UniqueName: \"kubernetes.io/projected/bbe6c3a1-1534-4095-9e25-1f4ce093938e-kube-api-access-mstk5\") pod \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.849362 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbe6c3a1-1534-4095-9e25-1f4ce093938e-client-ca\") pod \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.849414 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbe6c3a1-1534-4095-9e25-1f4ce093938e-serving-cert\") pod \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.849449 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe6c3a1-1534-4095-9e25-1f4ce093938e-config\") pod \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\" (UID: \"bbe6c3a1-1534-4095-9e25-1f4ce093938e\") " Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.849670 5000 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.849690 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:39:46 crc 
kubenswrapper[5000]: I0105 21:39:46.849700 5000 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.849711 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b64r7\" (UniqueName: \"kubernetes.io/projected/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-kube-api-access-b64r7\") on node \"crc\" DevicePath \"\"" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.849719 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c661b9d0-ba17-41d2-94dd-f1c71fe529d0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.850340 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe6c3a1-1534-4095-9e25-1f4ce093938e-config" (OuterVolumeSpecName: "config") pod "bbe6c3a1-1534-4095-9e25-1f4ce093938e" (UID: "bbe6c3a1-1534-4095-9e25-1f4ce093938e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.850371 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe6c3a1-1534-4095-9e25-1f4ce093938e-client-ca" (OuterVolumeSpecName: "client-ca") pod "bbe6c3a1-1534-4095-9e25-1f4ce093938e" (UID: "bbe6c3a1-1534-4095-9e25-1f4ce093938e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.963092 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbe6c3a1-1534-4095-9e25-1f4ce093938e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bbe6c3a1-1534-4095-9e25-1f4ce093938e" (UID: "bbe6c3a1-1534-4095-9e25-1f4ce093938e"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.963209 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe6c3a1-1534-4095-9e25-1f4ce093938e-kube-api-access-mstk5" (OuterVolumeSpecName: "kube-api-access-mstk5") pod "bbe6c3a1-1534-4095-9e25-1f4ce093938e" (UID: "bbe6c3a1-1534-4095-9e25-1f4ce093938e"). InnerVolumeSpecName "kube-api-access-mstk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.963248 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbe6c3a1-1534-4095-9e25-1f4ce093938e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.963280 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe6c3a1-1534-4095-9e25-1f4ce093938e-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:39:46 crc kubenswrapper[5000]: I0105 21:39:46.963291 5000 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbe6c3a1-1534-4095-9e25-1f4ce093938e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.064337 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mstk5\" (UniqueName: \"kubernetes.io/projected/bbe6c3a1-1534-4095-9e25-1f4ce093938e-kube-api-access-mstk5\") on node \"crc\" DevicePath \"\"" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.332972 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx"] Jan 05 21:39:47 crc kubenswrapper[5000]: E0105 21:39:47.333178 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" 
containerName="installer" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.333191 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" containerName="installer" Jan 05 21:39:47 crc kubenswrapper[5000]: E0105 21:39:47.333205 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.333211 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 05 21:39:47 crc kubenswrapper[5000]: E0105 21:39:47.333223 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c661b9d0-ba17-41d2-94dd-f1c71fe529d0" containerName="controller-manager" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.333229 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="c661b9d0-ba17-41d2-94dd-f1c71fe529d0" containerName="controller-manager" Jan 05 21:39:47 crc kubenswrapper[5000]: E0105 21:39:47.333240 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe6c3a1-1534-4095-9e25-1f4ce093938e" containerName="route-controller-manager" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.333246 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe6c3a1-1534-4095-9e25-1f4ce093938e" containerName="route-controller-manager" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.333338 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="c661b9d0-ba17-41d2-94dd-f1c71fe529d0" containerName="controller-manager" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.333365 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="2199f70b-6ba2-4e30-8e73-7eb7fc512d37" containerName="installer" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.333379 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe6c3a1-1534-4095-9e25-1f4ce093938e" 
containerName="route-controller-manager" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.333390 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.333772 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.338400 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-9tprv"] Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.339755 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.351252 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx"] Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.360702 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-9tprv"] Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.484641 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f74fb3bf-8149-4b45-adc7-6213a99e0f13-serving-cert\") pod \"route-controller-manager-5474b5bbd7-pk6lx\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.484971 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74fb3bf-8149-4b45-adc7-6213a99e0f13-config\") pod 
\"route-controller-manager-5474b5bbd7-pk6lx\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.485124 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-client-ca\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.485279 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5kf7\" (UniqueName: \"kubernetes.io/projected/f74fb3bf-8149-4b45-adc7-6213a99e0f13-kube-api-access-g5kf7\") pod \"route-controller-manager-5474b5bbd7-pk6lx\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.485415 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-proxy-ca-bundles\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.485542 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkkv9\" (UniqueName: \"kubernetes.io/projected/1efcef32-dfa0-4af5-b88d-33eb5abcb442-kube-api-access-tkkv9\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 
crc kubenswrapper[5000]: I0105 21:39:47.485682 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1efcef32-dfa0-4af5-b88d-33eb5abcb442-serving-cert\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.485788 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f74fb3bf-8149-4b45-adc7-6213a99e0f13-client-ca\") pod \"route-controller-manager-5474b5bbd7-pk6lx\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.485922 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-config\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.543487 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" event={"ID":"bbe6c3a1-1534-4095-9e25-1f4ce093938e","Type":"ContainerDied","Data":"52b9e31bce1cc59a27ddd112783506b1edcd5f9243ef2dc48cc4c92f6550bba3"} Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.543508 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.543560 5000 scope.go:117] "RemoveContainer" containerID="bd405d30ee6bff63ae848c7ad9bdfd880f9a3294acc48fdc11ec2dfc8c8753ac" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.545053 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" event={"ID":"c661b9d0-ba17-41d2-94dd-f1c71fe529d0","Type":"ContainerDied","Data":"cc84d78b788adf114ef8aef7b00706fd5c69a54e36075e9173b37de6e218d12c"} Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.545087 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d5n4f" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.558739 5000 scope.go:117] "RemoveContainer" containerID="6e8b9a2523ea6996d59527c4d44c62e033ef03c43c74d80503b737fda6e10e34" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.559766 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d5n4f"] Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.565146 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d5n4f"] Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.569831 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825"] Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.572913 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mr825"] Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.586794 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-proxy-ca-bundles\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.586846 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkkv9\" (UniqueName: \"kubernetes.io/projected/1efcef32-dfa0-4af5-b88d-33eb5abcb442-kube-api-access-tkkv9\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.586925 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1efcef32-dfa0-4af5-b88d-33eb5abcb442-serving-cert\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.586944 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f74fb3bf-8149-4b45-adc7-6213a99e0f13-client-ca\") pod \"route-controller-manager-5474b5bbd7-pk6lx\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.586960 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-config\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.586975 
5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f74fb3bf-8149-4b45-adc7-6213a99e0f13-serving-cert\") pod \"route-controller-manager-5474b5bbd7-pk6lx\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.586998 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74fb3bf-8149-4b45-adc7-6213a99e0f13-config\") pod \"route-controller-manager-5474b5bbd7-pk6lx\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.587020 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-client-ca\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.587040 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5kf7\" (UniqueName: \"kubernetes.io/projected/f74fb3bf-8149-4b45-adc7-6213a99e0f13-kube-api-access-g5kf7\") pod \"route-controller-manager-5474b5bbd7-pk6lx\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.588135 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f74fb3bf-8149-4b45-adc7-6213a99e0f13-client-ca\") pod \"route-controller-manager-5474b5bbd7-pk6lx\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " 
pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.588318 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-client-ca\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.588422 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-proxy-ca-bundles\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.588580 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74fb3bf-8149-4b45-adc7-6213a99e0f13-config\") pod \"route-controller-manager-5474b5bbd7-pk6lx\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.588873 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-config\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.592548 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f74fb3bf-8149-4b45-adc7-6213a99e0f13-serving-cert\") pod 
\"route-controller-manager-5474b5bbd7-pk6lx\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.603520 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1efcef32-dfa0-4af5-b88d-33eb5abcb442-serving-cert\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.613103 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5kf7\" (UniqueName: \"kubernetes.io/projected/f74fb3bf-8149-4b45-adc7-6213a99e0f13-kube-api-access-g5kf7\") pod \"route-controller-manager-5474b5bbd7-pk6lx\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.620366 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkkv9\" (UniqueName: \"kubernetes.io/projected/1efcef32-dfa0-4af5-b88d-33eb5abcb442-kube-api-access-tkkv9\") pod \"controller-manager-7568f5d7c4-9tprv\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.652485 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.695466 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.817051 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx"] Jan 05 21:39:47 crc kubenswrapper[5000]: W0105 21:39:47.826096 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf74fb3bf_8149_4b45_adc7_6213a99e0f13.slice/crio-8957707d0a981425e3da284b1e6434237846c4aaf9322e7ab0dac9055e3ab176 WatchSource:0}: Error finding container 8957707d0a981425e3da284b1e6434237846c4aaf9322e7ab0dac9055e3ab176: Status 404 returned error can't find the container with id 8957707d0a981425e3da284b1e6434237846c4aaf9322e7ab0dac9055e3ab176 Jan 05 21:39:47 crc kubenswrapper[5000]: I0105 21:39:47.892781 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-9tprv"] Jan 05 21:39:47 crc kubenswrapper[5000]: W0105 21:39:47.903115 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1efcef32_dfa0_4af5_b88d_33eb5abcb442.slice/crio-7125168c12a46cab771f4e33884a461e68670640e28849bb3f8a8d787b46ac7f WatchSource:0}: Error finding container 7125168c12a46cab771f4e33884a461e68670640e28849bb3f8a8d787b46ac7f: Status 404 returned error can't find the container with id 7125168c12a46cab771f4e33884a461e68670640e28849bb3f8a8d787b46ac7f Jan 05 21:39:48 crc kubenswrapper[5000]: I0105 21:39:48.551196 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" event={"ID":"f74fb3bf-8149-4b45-adc7-6213a99e0f13","Type":"ContainerStarted","Data":"2da58d4f11a8ff6db14e0292d90f6cbd926d4cf1a7a2103fcb21b02c80dcc5c6"} Jan 05 21:39:48 crc kubenswrapper[5000]: I0105 21:39:48.551499 5000 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:48 crc kubenswrapper[5000]: I0105 21:39:48.551512 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" event={"ID":"f74fb3bf-8149-4b45-adc7-6213a99e0f13","Type":"ContainerStarted","Data":"8957707d0a981425e3da284b1e6434237846c4aaf9322e7ab0dac9055e3ab176"} Jan 05 21:39:48 crc kubenswrapper[5000]: I0105 21:39:48.555830 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" event={"ID":"1efcef32-dfa0-4af5-b88d-33eb5abcb442","Type":"ContainerStarted","Data":"c19eceb5e3d895108818aaaefdb4cd296ab4b306a9387222f2627693ad977984"} Jan 05 21:39:48 crc kubenswrapper[5000]: I0105 21:39:48.555868 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" event={"ID":"1efcef32-dfa0-4af5-b88d-33eb5abcb442","Type":"ContainerStarted","Data":"7125168c12a46cab771f4e33884a461e68670640e28849bb3f8a8d787b46ac7f"} Jan 05 21:39:48 crc kubenswrapper[5000]: I0105 21:39:48.556140 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:48 crc kubenswrapper[5000]: I0105 21:39:48.558272 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:39:48 crc kubenswrapper[5000]: I0105 21:39:48.561097 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:39:48 crc kubenswrapper[5000]: I0105 21:39:48.579250 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" podStartSLOduration=2.579230604 podStartE2EDuration="2.579230604s" podCreationTimestamp="2026-01-05 21:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:39:48.566828848 +0000 UTC m=+343.523031317" watchObservedRunningTime="2026-01-05 21:39:48.579230604 +0000 UTC m=+343.535433093" Jan 05 21:39:48 crc kubenswrapper[5000]: I0105 21:39:48.596406 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" podStartSLOduration=2.596387664 podStartE2EDuration="2.596387664s" podCreationTimestamp="2026-01-05 21:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:39:48.59558497 +0000 UTC m=+343.551787459" watchObservedRunningTime="2026-01-05 21:39:48.596387664 +0000 UTC m=+343.552590133" Jan 05 21:39:49 crc kubenswrapper[5000]: I0105 21:39:49.330779 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe6c3a1-1534-4095-9e25-1f4ce093938e" path="/var/lib/kubelet/pods/bbe6c3a1-1534-4095-9e25-1f4ce093938e/volumes" Jan 05 21:39:49 crc kubenswrapper[5000]: I0105 21:39:49.331351 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c661b9d0-ba17-41d2-94dd-f1c71fe529d0" path="/var/lib/kubelet/pods/c661b9d0-ba17-41d2-94dd-f1c71fe529d0/volumes" Jan 05 21:39:53 crc kubenswrapper[5000]: I0105 21:39:53.099103 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:39:53 crc kubenswrapper[5000]: I0105 21:39:53.099401 5000 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.248805 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-54c86"] Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.251116 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54c86" Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.254111 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.259705 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54c86"] Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.419393 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cshcc\" (UniqueName: \"kubernetes.io/projected/8ac8e069-4823-418e-be56-ec272b979420-kube-api-access-cshcc\") pod \"community-operators-54c86\" (UID: \"8ac8e069-4823-418e-be56-ec272b979420\") " pod="openshift-marketplace/community-operators-54c86" Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.419533 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac8e069-4823-418e-be56-ec272b979420-catalog-content\") pod \"community-operators-54c86\" (UID: \"8ac8e069-4823-418e-be56-ec272b979420\") " pod="openshift-marketplace/community-operators-54c86" Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.419596 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac8e069-4823-418e-be56-ec272b979420-utilities\") pod \"community-operators-54c86\" (UID: \"8ac8e069-4823-418e-be56-ec272b979420\") " pod="openshift-marketplace/community-operators-54c86" Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.520636 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac8e069-4823-418e-be56-ec272b979420-catalog-content\") pod \"community-operators-54c86\" (UID: \"8ac8e069-4823-418e-be56-ec272b979420\") " pod="openshift-marketplace/community-operators-54c86" Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.520776 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac8e069-4823-418e-be56-ec272b979420-utilities\") pod \"community-operators-54c86\" (UID: \"8ac8e069-4823-418e-be56-ec272b979420\") " pod="openshift-marketplace/community-operators-54c86" Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.520853 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cshcc\" (UniqueName: \"kubernetes.io/projected/8ac8e069-4823-418e-be56-ec272b979420-kube-api-access-cshcc\") pod \"community-operators-54c86\" (UID: \"8ac8e069-4823-418e-be56-ec272b979420\") " pod="openshift-marketplace/community-operators-54c86" Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.521255 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac8e069-4823-418e-be56-ec272b979420-utilities\") pod \"community-operators-54c86\" (UID: \"8ac8e069-4823-418e-be56-ec272b979420\") " pod="openshift-marketplace/community-operators-54c86" Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.521251 5000 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac8e069-4823-418e-be56-ec272b979420-catalog-content\") pod \"community-operators-54c86\" (UID: \"8ac8e069-4823-418e-be56-ec272b979420\") " pod="openshift-marketplace/community-operators-54c86" Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.548940 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cshcc\" (UniqueName: \"kubernetes.io/projected/8ac8e069-4823-418e-be56-ec272b979420-kube-api-access-cshcc\") pod \"community-operators-54c86\" (UID: \"8ac8e069-4823-418e-be56-ec272b979420\") " pod="openshift-marketplace/community-operators-54c86" Jan 05 21:39:58 crc kubenswrapper[5000]: I0105 21:39:58.568482 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54c86" Jan 05 21:39:59 crc kubenswrapper[5000]: I0105 21:39:59.152529 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54c86"] Jan 05 21:39:59 crc kubenswrapper[5000]: I0105 21:39:59.608686 5000 generic.go:334] "Generic (PLEG): container finished" podID="8ac8e069-4823-418e-be56-ec272b979420" containerID="7a17ece3ff56ba943773e702649bf2c596da50ff4f5dae81524b7a94d6ffc0ee" exitCode=0 Jan 05 21:39:59 crc kubenswrapper[5000]: I0105 21:39:59.608720 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54c86" event={"ID":"8ac8e069-4823-418e-be56-ec272b979420","Type":"ContainerDied","Data":"7a17ece3ff56ba943773e702649bf2c596da50ff4f5dae81524b7a94d6ffc0ee"} Jan 05 21:39:59 crc kubenswrapper[5000]: I0105 21:39:59.608764 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54c86" event={"ID":"8ac8e069-4823-418e-be56-ec272b979420","Type":"ContainerStarted","Data":"cc36479aca5598e3dcb05847028dac1caddf7e7f8386a93255224243bf86273d"} Jan 05 21:40:00 crc kubenswrapper[5000]: I0105 
21:40:00.847795 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c5kv5"] Jan 05 21:40:00 crc kubenswrapper[5000]: I0105 21:40:00.849083 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:00 crc kubenswrapper[5000]: I0105 21:40:00.851275 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 05 21:40:00 crc kubenswrapper[5000]: I0105 21:40:00.856640 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c5kv5"] Jan 05 21:40:01 crc kubenswrapper[5000]: I0105 21:40:01.051368 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928d6f47-cdd2-4d32-a807-f94d9cbc05cb-catalog-content\") pod \"redhat-marketplace-c5kv5\" (UID: \"928d6f47-cdd2-4d32-a807-f94d9cbc05cb\") " pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:01 crc kubenswrapper[5000]: I0105 21:40:01.051482 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4kqr\" (UniqueName: \"kubernetes.io/projected/928d6f47-cdd2-4d32-a807-f94d9cbc05cb-kube-api-access-v4kqr\") pod \"redhat-marketplace-c5kv5\" (UID: \"928d6f47-cdd2-4d32-a807-f94d9cbc05cb\") " pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:01 crc kubenswrapper[5000]: I0105 21:40:01.051569 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928d6f47-cdd2-4d32-a807-f94d9cbc05cb-utilities\") pod \"redhat-marketplace-c5kv5\" (UID: \"928d6f47-cdd2-4d32-a807-f94d9cbc05cb\") " pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:01 crc kubenswrapper[5000]: I0105 21:40:01.152679 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928d6f47-cdd2-4d32-a807-f94d9cbc05cb-catalog-content\") pod \"redhat-marketplace-c5kv5\" (UID: \"928d6f47-cdd2-4d32-a807-f94d9cbc05cb\") " pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:01 crc kubenswrapper[5000]: I0105 21:40:01.153008 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4kqr\" (UniqueName: \"kubernetes.io/projected/928d6f47-cdd2-4d32-a807-f94d9cbc05cb-kube-api-access-v4kqr\") pod \"redhat-marketplace-c5kv5\" (UID: \"928d6f47-cdd2-4d32-a807-f94d9cbc05cb\") " pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:01 crc kubenswrapper[5000]: I0105 21:40:01.153152 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928d6f47-cdd2-4d32-a807-f94d9cbc05cb-utilities\") pod \"redhat-marketplace-c5kv5\" (UID: \"928d6f47-cdd2-4d32-a807-f94d9cbc05cb\") " pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:01 crc kubenswrapper[5000]: I0105 21:40:01.153530 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928d6f47-cdd2-4d32-a807-f94d9cbc05cb-catalog-content\") pod \"redhat-marketplace-c5kv5\" (UID: \"928d6f47-cdd2-4d32-a807-f94d9cbc05cb\") " pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:01 crc kubenswrapper[5000]: I0105 21:40:01.153620 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928d6f47-cdd2-4d32-a807-f94d9cbc05cb-utilities\") pod \"redhat-marketplace-c5kv5\" (UID: \"928d6f47-cdd2-4d32-a807-f94d9cbc05cb\") " pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:01 crc kubenswrapper[5000]: I0105 21:40:01.174185 5000 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-v4kqr\" (UniqueName: \"kubernetes.io/projected/928d6f47-cdd2-4d32-a807-f94d9cbc05cb-kube-api-access-v4kqr\") pod \"redhat-marketplace-c5kv5\" (UID: \"928d6f47-cdd2-4d32-a807-f94d9cbc05cb\") " pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:01 crc kubenswrapper[5000]: I0105 21:40:01.466291 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:01 crc kubenswrapper[5000]: I0105 21:40:01.623491 5000 generic.go:334] "Generic (PLEG): container finished" podID="8ac8e069-4823-418e-be56-ec272b979420" containerID="59ecd29ef38576c7659f0a3ca2738f058c03cf0e3b796dc3888fa73350ee9842" exitCode=0 Jan 05 21:40:01 crc kubenswrapper[5000]: I0105 21:40:01.623533 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54c86" event={"ID":"8ac8e069-4823-418e-be56-ec272b979420","Type":"ContainerDied","Data":"59ecd29ef38576c7659f0a3ca2738f058c03cf0e3b796dc3888fa73350ee9842"} Jan 05 21:40:01 crc kubenswrapper[5000]: I0105 21:40:01.905014 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c5kv5"] Jan 05 21:40:01 crc kubenswrapper[5000]: W0105 21:40:01.911863 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod928d6f47_cdd2_4d32_a807_f94d9cbc05cb.slice/crio-a482744651d8df927533aad76c0839f54b6d5a757f364cca0b599bcb7cf89a31 WatchSource:0}: Error finding container a482744651d8df927533aad76c0839f54b6d5a757f364cca0b599bcb7cf89a31: Status 404 returned error can't find the container with id a482744651d8df927533aad76c0839f54b6d5a757f364cca0b599bcb7cf89a31 Jan 05 21:40:02 crc kubenswrapper[5000]: I0105 21:40:02.630656 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54c86" 
event={"ID":"8ac8e069-4823-418e-be56-ec272b979420","Type":"ContainerStarted","Data":"88efee54f01967f8e8c2c9487566ba12797f3d06016ff3fb50dbf86d6d58be2b"} Jan 05 21:40:02 crc kubenswrapper[5000]: I0105 21:40:02.631945 5000 generic.go:334] "Generic (PLEG): container finished" podID="928d6f47-cdd2-4d32-a807-f94d9cbc05cb" containerID="7fcd0d256980a9fc90b61c651682a557916af21f57deb42399fd5b8ce528d607" exitCode=0 Jan 05 21:40:02 crc kubenswrapper[5000]: I0105 21:40:02.631993 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5kv5" event={"ID":"928d6f47-cdd2-4d32-a807-f94d9cbc05cb","Type":"ContainerDied","Data":"7fcd0d256980a9fc90b61c651682a557916af21f57deb42399fd5b8ce528d607"} Jan 05 21:40:02 crc kubenswrapper[5000]: I0105 21:40:02.632045 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5kv5" event={"ID":"928d6f47-cdd2-4d32-a807-f94d9cbc05cb","Type":"ContainerStarted","Data":"a482744651d8df927533aad76c0839f54b6d5a757f364cca0b599bcb7cf89a31"} Jan 05 21:40:02 crc kubenswrapper[5000]: I0105 21:40:02.646195 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-54c86" podStartSLOduration=2.056594329 podStartE2EDuration="4.646179883s" podCreationTimestamp="2026-01-05 21:39:58 +0000 UTC" firstStartedPulling="2026-01-05 21:39:59.609945249 +0000 UTC m=+354.566147718" lastFinishedPulling="2026-01-05 21:40:02.199530803 +0000 UTC m=+357.155733272" observedRunningTime="2026-01-05 21:40:02.642605385 +0000 UTC m=+357.598807864" watchObservedRunningTime="2026-01-05 21:40:02.646179883 +0000 UTC m=+357.602382352" Jan 05 21:40:03 crc kubenswrapper[5000]: I0105 21:40:03.637750 5000 generic.go:334] "Generic (PLEG): container finished" podID="928d6f47-cdd2-4d32-a807-f94d9cbc05cb" containerID="e644c0641b19100fb4f8cdf59598c23ac3bbb740802034abba70b8790fb0ae22" exitCode=0 Jan 05 21:40:03 crc kubenswrapper[5000]: I0105 
21:40:03.638456 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5kv5" event={"ID":"928d6f47-cdd2-4d32-a807-f94d9cbc05cb","Type":"ContainerDied","Data":"e644c0641b19100fb4f8cdf59598c23ac3bbb740802034abba70b8790fb0ae22"} Jan 05 21:40:04 crc kubenswrapper[5000]: I0105 21:40:04.644752 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c5kv5" event={"ID":"928d6f47-cdd2-4d32-a807-f94d9cbc05cb","Type":"ContainerStarted","Data":"23a25b1e21f2c4d587a2019723c8aaec0eacbe4dd53a6c2afce0b76174b71681"} Jan 05 21:40:04 crc kubenswrapper[5000]: I0105 21:40:04.663771 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c5kv5" podStartSLOduration=3.126965718 podStartE2EDuration="4.663753445s" podCreationTimestamp="2026-01-05 21:40:00 +0000 UTC" firstStartedPulling="2026-01-05 21:40:02.633123517 +0000 UTC m=+357.589325986" lastFinishedPulling="2026-01-05 21:40:04.169911244 +0000 UTC m=+359.126113713" observedRunningTime="2026-01-05 21:40:04.663042403 +0000 UTC m=+359.619244872" watchObservedRunningTime="2026-01-05 21:40:04.663753445 +0000 UTC m=+359.619955904" Jan 05 21:40:08 crc kubenswrapper[5000]: I0105 21:40:08.568830 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-54c86" Jan 05 21:40:08 crc kubenswrapper[5000]: I0105 21:40:08.569447 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-54c86" Jan 05 21:40:08 crc kubenswrapper[5000]: I0105 21:40:08.633757 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-54c86" Jan 05 21:40:08 crc kubenswrapper[5000]: I0105 21:40:08.698990 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-54c86" Jan 05 
21:40:11 crc kubenswrapper[5000]: I0105 21:40:11.466970 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:11 crc kubenswrapper[5000]: I0105 21:40:11.467577 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:11 crc kubenswrapper[5000]: I0105 21:40:11.505865 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:11 crc kubenswrapper[5000]: I0105 21:40:11.714164 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c5kv5" Jan 05 21:40:23 crc kubenswrapper[5000]: I0105 21:40:23.098724 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:40:23 crc kubenswrapper[5000]: I0105 21:40:23.099183 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.262577 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9ppm9"] Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.263760 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.333497 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9ppm9"] Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.422857 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.422938 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.422971 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-bound-sa-token\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.423004 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-registry-tls\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.423027 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.423045 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-registry-certificates\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.423065 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-trusted-ca\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.423092 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8mdm\" (UniqueName: \"kubernetes.io/projected/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-kube-api-access-z8mdm\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.446295 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.525192 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-registry-tls\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.525297 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.525321 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-registry-certificates\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.525343 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-trusted-ca\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.525401 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8mdm\" (UniqueName: \"kubernetes.io/projected/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-kube-api-access-z8mdm\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.526015 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.526592 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.526916 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-bound-sa-token\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.528448 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-trusted-ca\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" 
Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.529562 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-registry-certificates\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.534576 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.535167 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-registry-tls\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.542528 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8mdm\" (UniqueName: \"kubernetes.io/projected/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-kube-api-access-z8mdm\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.546437 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf-bound-sa-token\") pod \"image-registry-66df7c8f76-9ppm9\" (UID: \"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:24 crc kubenswrapper[5000]: I0105 21:40:24.579959 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:25 crc kubenswrapper[5000]: I0105 21:40:24.998562 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9ppm9"] Jan 05 21:40:25 crc kubenswrapper[5000]: W0105 21:40:25.011718 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a0c1d2f_1179_4d4d_bd4b_6c878e2e9eaf.slice/crio-45521622e14b0eaa47800e2f2a3e5f8a06b8668d99914de1d779897c86785efa WatchSource:0}: Error finding container 45521622e14b0eaa47800e2f2a3e5f8a06b8668d99914de1d779897c86785efa: Status 404 returned error can't find the container with id 45521622e14b0eaa47800e2f2a3e5f8a06b8668d99914de1d779897c86785efa Jan 05 21:40:25 crc kubenswrapper[5000]: I0105 21:40:25.747746 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" event={"ID":"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf","Type":"ContainerStarted","Data":"91ed721f4c7c5a9578dd357c3d72923db8830eecf74870ae1001b2150ac5b518"} Jan 05 21:40:25 crc kubenswrapper[5000]: I0105 21:40:25.748752 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" event={"ID":"8a0c1d2f-1179-4d4d-bd4b-6c878e2e9eaf","Type":"ContainerStarted","Data":"45521622e14b0eaa47800e2f2a3e5f8a06b8668d99914de1d779897c86785efa"} Jan 05 21:40:25 crc kubenswrapper[5000]: I0105 21:40:25.758036 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:25 crc kubenswrapper[5000]: I0105 21:40:25.777817 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" podStartSLOduration=1.777801994 podStartE2EDuration="1.777801994s" podCreationTimestamp="2026-01-05 21:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:40:25.776521525 +0000 UTC m=+380.732723994" watchObservedRunningTime="2026-01-05 21:40:25.777801994 +0000 UTC m=+380.734004453" Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.418566 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx"] Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.418992 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" podUID="f74fb3bf-8149-4b45-adc7-6213a99e0f13" containerName="route-controller-manager" containerID="cri-o://2da58d4f11a8ff6db14e0292d90f6cbd926d4cf1a7a2103fcb21b02c80dcc5c6" gracePeriod=30 Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.739116 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.762724 5000 generic.go:334] "Generic (PLEG): container finished" podID="f74fb3bf-8149-4b45-adc7-6213a99e0f13" containerID="2da58d4f11a8ff6db14e0292d90f6cbd926d4cf1a7a2103fcb21b02c80dcc5c6" exitCode=0 Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.763416 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.763740 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" event={"ID":"f74fb3bf-8149-4b45-adc7-6213a99e0f13","Type":"ContainerDied","Data":"2da58d4f11a8ff6db14e0292d90f6cbd926d4cf1a7a2103fcb21b02c80dcc5c6"} Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.763766 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" event={"ID":"f74fb3bf-8149-4b45-adc7-6213a99e0f13","Type":"ContainerDied","Data":"8957707d0a981425e3da284b1e6434237846c4aaf9322e7ab0dac9055e3ab176"} Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.763782 5000 scope.go:117] "RemoveContainer" containerID="2da58d4f11a8ff6db14e0292d90f6cbd926d4cf1a7a2103fcb21b02c80dcc5c6" Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.782029 5000 scope.go:117] "RemoveContainer" containerID="2da58d4f11a8ff6db14e0292d90f6cbd926d4cf1a7a2103fcb21b02c80dcc5c6" Jan 05 21:40:27 crc kubenswrapper[5000]: E0105 21:40:27.782824 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2da58d4f11a8ff6db14e0292d90f6cbd926d4cf1a7a2103fcb21b02c80dcc5c6\": container with ID starting with 2da58d4f11a8ff6db14e0292d90f6cbd926d4cf1a7a2103fcb21b02c80dcc5c6 not found: ID does not exist" containerID="2da58d4f11a8ff6db14e0292d90f6cbd926d4cf1a7a2103fcb21b02c80dcc5c6" Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.782865 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2da58d4f11a8ff6db14e0292d90f6cbd926d4cf1a7a2103fcb21b02c80dcc5c6"} err="failed to get container status \"2da58d4f11a8ff6db14e0292d90f6cbd926d4cf1a7a2103fcb21b02c80dcc5c6\": rpc error: code = NotFound desc 
= could not find container \"2da58d4f11a8ff6db14e0292d90f6cbd926d4cf1a7a2103fcb21b02c80dcc5c6\": container with ID starting with 2da58d4f11a8ff6db14e0292d90f6cbd926d4cf1a7a2103fcb21b02c80dcc5c6 not found: ID does not exist" Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.868380 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5kf7\" (UniqueName: \"kubernetes.io/projected/f74fb3bf-8149-4b45-adc7-6213a99e0f13-kube-api-access-g5kf7\") pod \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.868487 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74fb3bf-8149-4b45-adc7-6213a99e0f13-config\") pod \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.868520 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f74fb3bf-8149-4b45-adc7-6213a99e0f13-serving-cert\") pod \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.868573 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f74fb3bf-8149-4b45-adc7-6213a99e0f13-client-ca\") pod \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\" (UID: \"f74fb3bf-8149-4b45-adc7-6213a99e0f13\") " Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.869543 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f74fb3bf-8149-4b45-adc7-6213a99e0f13-config" (OuterVolumeSpecName: "config") pod "f74fb3bf-8149-4b45-adc7-6213a99e0f13" (UID: "f74fb3bf-8149-4b45-adc7-6213a99e0f13"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.870046 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f74fb3bf-8149-4b45-adc7-6213a99e0f13-client-ca" (OuterVolumeSpecName: "client-ca") pod "f74fb3bf-8149-4b45-adc7-6213a99e0f13" (UID: "f74fb3bf-8149-4b45-adc7-6213a99e0f13"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.873555 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f74fb3bf-8149-4b45-adc7-6213a99e0f13-kube-api-access-g5kf7" (OuterVolumeSpecName: "kube-api-access-g5kf7") pod "f74fb3bf-8149-4b45-adc7-6213a99e0f13" (UID: "f74fb3bf-8149-4b45-adc7-6213a99e0f13"). InnerVolumeSpecName "kube-api-access-g5kf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.877406 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f74fb3bf-8149-4b45-adc7-6213a99e0f13-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f74fb3bf-8149-4b45-adc7-6213a99e0f13" (UID: "f74fb3bf-8149-4b45-adc7-6213a99e0f13"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.969619 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74fb3bf-8149-4b45-adc7-6213a99e0f13-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.969664 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f74fb3bf-8149-4b45-adc7-6213a99e0f13-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.969676 5000 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f74fb3bf-8149-4b45-adc7-6213a99e0f13-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:40:27 crc kubenswrapper[5000]: I0105 21:40:27.969688 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5kf7\" (UniqueName: \"kubernetes.io/projected/f74fb3bf-8149-4b45-adc7-6213a99e0f13-kube-api-access-g5kf7\") on node \"crc\" DevicePath \"\"" Jan 05 21:40:28 crc kubenswrapper[5000]: I0105 21:40:28.093007 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx"] Jan 05 21:40:28 crc kubenswrapper[5000]: I0105 21:40:28.096808 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx"] Jan 05 21:40:28 crc kubenswrapper[5000]: I0105 21:40:28.654067 5000 patch_prober.go:28] interesting pod/route-controller-manager-5474b5bbd7-pk6lx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:40:28 crc kubenswrapper[5000]: I0105 
21:40:28.654376 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5474b5bbd7-pk6lx" podUID="f74fb3bf-8149-4b45-adc7-6213a99e0f13" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.329637 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f74fb3bf-8149-4b45-adc7-6213a99e0f13" path="/var/lib/kubelet/pods/f74fb3bf-8149-4b45-adc7-6213a99e0f13/volumes" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.357105 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s"] Jan 05 21:40:29 crc kubenswrapper[5000]: E0105 21:40:29.357343 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f74fb3bf-8149-4b45-adc7-6213a99e0f13" containerName="route-controller-manager" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.357364 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f74fb3bf-8149-4b45-adc7-6213a99e0f13" containerName="route-controller-manager" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.357473 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f74fb3bf-8149-4b45-adc7-6213a99e0f13" containerName="route-controller-manager" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.357853 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.359875 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.361692 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.361863 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.362024 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.362158 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.362909 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.366553 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s"] Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.489690 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d096b6fe-bad7-410a-b43f-d55b194cb04f-serving-cert\") pod \"route-controller-manager-665c57ff98-5hr5s\" (UID: \"d096b6fe-bad7-410a-b43f-d55b194cb04f\") " pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.489972 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d096b6fe-bad7-410a-b43f-d55b194cb04f-config\") pod \"route-controller-manager-665c57ff98-5hr5s\" (UID: \"d096b6fe-bad7-410a-b43f-d55b194cb04f\") " pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.490047 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s99dr\" (UniqueName: \"kubernetes.io/projected/d096b6fe-bad7-410a-b43f-d55b194cb04f-kube-api-access-s99dr\") pod \"route-controller-manager-665c57ff98-5hr5s\" (UID: \"d096b6fe-bad7-410a-b43f-d55b194cb04f\") " pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.490165 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d096b6fe-bad7-410a-b43f-d55b194cb04f-client-ca\") pod \"route-controller-manager-665c57ff98-5hr5s\" (UID: \"d096b6fe-bad7-410a-b43f-d55b194cb04f\") " pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.591633 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d096b6fe-bad7-410a-b43f-d55b194cb04f-config\") pod \"route-controller-manager-665c57ff98-5hr5s\" (UID: \"d096b6fe-bad7-410a-b43f-d55b194cb04f\") " pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.591924 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s99dr\" (UniqueName: \"kubernetes.io/projected/d096b6fe-bad7-410a-b43f-d55b194cb04f-kube-api-access-s99dr\") pod 
\"route-controller-manager-665c57ff98-5hr5s\" (UID: \"d096b6fe-bad7-410a-b43f-d55b194cb04f\") " pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.592028 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d096b6fe-bad7-410a-b43f-d55b194cb04f-client-ca\") pod \"route-controller-manager-665c57ff98-5hr5s\" (UID: \"d096b6fe-bad7-410a-b43f-d55b194cb04f\") " pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.592172 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d096b6fe-bad7-410a-b43f-d55b194cb04f-serving-cert\") pod \"route-controller-manager-665c57ff98-5hr5s\" (UID: \"d096b6fe-bad7-410a-b43f-d55b194cb04f\") " pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.594259 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d096b6fe-bad7-410a-b43f-d55b194cb04f-config\") pod \"route-controller-manager-665c57ff98-5hr5s\" (UID: \"d096b6fe-bad7-410a-b43f-d55b194cb04f\") " pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.603226 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d096b6fe-bad7-410a-b43f-d55b194cb04f-client-ca\") pod \"route-controller-manager-665c57ff98-5hr5s\" (UID: \"d096b6fe-bad7-410a-b43f-d55b194cb04f\") " pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.603431 5000 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d096b6fe-bad7-410a-b43f-d55b194cb04f-serving-cert\") pod \"route-controller-manager-665c57ff98-5hr5s\" (UID: \"d096b6fe-bad7-410a-b43f-d55b194cb04f\") " pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.608661 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s99dr\" (UniqueName: \"kubernetes.io/projected/d096b6fe-bad7-410a-b43f-d55b194cb04f-kube-api-access-s99dr\") pod \"route-controller-manager-665c57ff98-5hr5s\" (UID: \"d096b6fe-bad7-410a-b43f-d55b194cb04f\") " pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.672139 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:29 crc kubenswrapper[5000]: I0105 21:40:29.836964 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s"] Jan 05 21:40:29 crc kubenswrapper[5000]: W0105 21:40:29.846419 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd096b6fe_bad7_410a_b43f_d55b194cb04f.slice/crio-f12a5d8efea5d6fcc69967ff59f20240169d1651d51a053ba7e0572f72521fca WatchSource:0}: Error finding container f12a5d8efea5d6fcc69967ff59f20240169d1651d51a053ba7e0572f72521fca: Status 404 returned error can't find the container with id f12a5d8efea5d6fcc69967ff59f20240169d1651d51a053ba7e0572f72521fca Jan 05 21:40:30 crc kubenswrapper[5000]: I0105 21:40:30.779680 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" 
event={"ID":"d096b6fe-bad7-410a-b43f-d55b194cb04f","Type":"ContainerStarted","Data":"1880c743356b4e70775f37669df4e8ef039135db432a1576a09e77442d2e94d0"} Jan 05 21:40:30 crc kubenswrapper[5000]: I0105 21:40:30.780034 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" event={"ID":"d096b6fe-bad7-410a-b43f-d55b194cb04f","Type":"ContainerStarted","Data":"f12a5d8efea5d6fcc69967ff59f20240169d1651d51a053ba7e0572f72521fca"} Jan 05 21:40:30 crc kubenswrapper[5000]: I0105 21:40:30.780062 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:30 crc kubenswrapper[5000]: I0105 21:40:30.786729 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" Jan 05 21:40:30 crc kubenswrapper[5000]: I0105 21:40:30.795984 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-665c57ff98-5hr5s" podStartSLOduration=3.795963269 podStartE2EDuration="3.795963269s" podCreationTimestamp="2026-01-05 21:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:40:30.794270758 +0000 UTC m=+385.750473227" watchObservedRunningTime="2026-01-05 21:40:30.795963269 +0000 UTC m=+385.752165738" Jan 05 21:40:44 crc kubenswrapper[5000]: I0105 21:40:44.585184 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9ppm9" Jan 05 21:40:44 crc kubenswrapper[5000]: I0105 21:40:44.636715 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w4mfk"] Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.414066 5000 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-9tprv"] Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.414760 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" podUID="1efcef32-dfa0-4af5-b88d-33eb5abcb442" containerName="controller-manager" containerID="cri-o://c19eceb5e3d895108818aaaefdb4cd296ab4b306a9387222f2627693ad977984" gracePeriod=30 Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.751172 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.864584 5000 generic.go:334] "Generic (PLEG): container finished" podID="1efcef32-dfa0-4af5-b88d-33eb5abcb442" containerID="c19eceb5e3d895108818aaaefdb4cd296ab4b306a9387222f2627693ad977984" exitCode=0 Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.864645 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.864683 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" event={"ID":"1efcef32-dfa0-4af5-b88d-33eb5abcb442","Type":"ContainerDied","Data":"c19eceb5e3d895108818aaaefdb4cd296ab4b306a9387222f2627693ad977984"} Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.864934 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" event={"ID":"1efcef32-dfa0-4af5-b88d-33eb5abcb442","Type":"ContainerDied","Data":"7125168c12a46cab771f4e33884a461e68670640e28849bb3f8a8d787b46ac7f"} Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.864954 5000 scope.go:117] "RemoveContainer" containerID="c19eceb5e3d895108818aaaefdb4cd296ab4b306a9387222f2627693ad977984" Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.878405 5000 scope.go:117] "RemoveContainer" containerID="c19eceb5e3d895108818aaaefdb4cd296ab4b306a9387222f2627693ad977984" Jan 05 21:40:47 crc kubenswrapper[5000]: E0105 21:40:47.878739 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c19eceb5e3d895108818aaaefdb4cd296ab4b306a9387222f2627693ad977984\": container with ID starting with c19eceb5e3d895108818aaaefdb4cd296ab4b306a9387222f2627693ad977984 not found: ID does not exist" containerID="c19eceb5e3d895108818aaaefdb4cd296ab4b306a9387222f2627693ad977984" Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.878780 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19eceb5e3d895108818aaaefdb4cd296ab4b306a9387222f2627693ad977984"} err="failed to get container status \"c19eceb5e3d895108818aaaefdb4cd296ab4b306a9387222f2627693ad977984\": rpc error: code = NotFound desc = could not find container 
\"c19eceb5e3d895108818aaaefdb4cd296ab4b306a9387222f2627693ad977984\": container with ID starting with c19eceb5e3d895108818aaaefdb4cd296ab4b306a9387222f2627693ad977984 not found: ID does not exist" Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.944757 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-client-ca\") pod \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.944837 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkkv9\" (UniqueName: \"kubernetes.io/projected/1efcef32-dfa0-4af5-b88d-33eb5abcb442-kube-api-access-tkkv9\") pod \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.944857 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1efcef32-dfa0-4af5-b88d-33eb5abcb442-serving-cert\") pod \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.944885 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-config\") pod \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.945565 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-client-ca" (OuterVolumeSpecName: "client-ca") pod "1efcef32-dfa0-4af5-b88d-33eb5abcb442" (UID: "1efcef32-dfa0-4af5-b88d-33eb5abcb442"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.945654 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-config" (OuterVolumeSpecName: "config") pod "1efcef32-dfa0-4af5-b88d-33eb5abcb442" (UID: "1efcef32-dfa0-4af5-b88d-33eb5abcb442"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.945794 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-proxy-ca-bundles\") pod \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\" (UID: \"1efcef32-dfa0-4af5-b88d-33eb5abcb442\") " Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.946244 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1efcef32-dfa0-4af5-b88d-33eb5abcb442" (UID: "1efcef32-dfa0-4af5-b88d-33eb5abcb442"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.946265 5000 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-client-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.946303 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.949878 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1efcef32-dfa0-4af5-b88d-33eb5abcb442-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1efcef32-dfa0-4af5-b88d-33eb5abcb442" (UID: "1efcef32-dfa0-4af5-b88d-33eb5abcb442"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:40:47 crc kubenswrapper[5000]: I0105 21:40:47.950312 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1efcef32-dfa0-4af5-b88d-33eb5abcb442-kube-api-access-tkkv9" (OuterVolumeSpecName: "kube-api-access-tkkv9") pod "1efcef32-dfa0-4af5-b88d-33eb5abcb442" (UID: "1efcef32-dfa0-4af5-b88d-33eb5abcb442"). InnerVolumeSpecName "kube-api-access-tkkv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:40:48 crc kubenswrapper[5000]: I0105 21:40:48.047038 5000 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1efcef32-dfa0-4af5-b88d-33eb5abcb442-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 05 21:40:48 crc kubenswrapper[5000]: I0105 21:40:48.047081 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkkv9\" (UniqueName: \"kubernetes.io/projected/1efcef32-dfa0-4af5-b88d-33eb5abcb442-kube-api-access-tkkv9\") on node \"crc\" DevicePath \"\"" Jan 05 21:40:48 crc kubenswrapper[5000]: I0105 21:40:48.047093 5000 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1efcef32-dfa0-4af5-b88d-33eb5abcb442-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:40:48 crc kubenswrapper[5000]: I0105 21:40:48.194976 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-9tprv"] Jan 05 21:40:48 crc kubenswrapper[5000]: I0105 21:40:48.198361 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7568f5d7c4-9tprv"] Jan 05 21:40:48 crc kubenswrapper[5000]: I0105 21:40:48.696870 5000 patch_prober.go:28] interesting pod/controller-manager-7568f5d7c4-9tprv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 05 21:40:48 crc kubenswrapper[5000]: I0105 21:40:48.696972 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7568f5d7c4-9tprv" podUID="1efcef32-dfa0-4af5-b88d-33eb5abcb442" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.334340 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1efcef32-dfa0-4af5-b88d-33eb5abcb442" path="/var/lib/kubelet/pods/1efcef32-dfa0-4af5-b88d-33eb5abcb442/volumes" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.375603 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b55dc9975-x4257"] Jan 05 21:40:49 crc kubenswrapper[5000]: E0105 21:40:49.375850 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1efcef32-dfa0-4af5-b88d-33eb5abcb442" containerName="controller-manager" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.375927 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="1efcef32-dfa0-4af5-b88d-33eb5abcb442" containerName="controller-manager" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.376695 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="1efcef32-dfa0-4af5-b88d-33eb5abcb442" containerName="controller-manager" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.377301 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.385670 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b55dc9975-x4257"] Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.420835 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.421011 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.421187 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.421206 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.421446 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.421729 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.429631 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.566442 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15b76009-3c42-43cf-a69a-bd15c7179588-client-ca\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " 
pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.566618 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7pjp\" (UniqueName: \"kubernetes.io/projected/15b76009-3c42-43cf-a69a-bd15c7179588-kube-api-access-d7pjp\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.566750 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b76009-3c42-43cf-a69a-bd15c7179588-config\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.566791 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15b76009-3c42-43cf-a69a-bd15c7179588-proxy-ca-bundles\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.566843 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b76009-3c42-43cf-a69a-bd15c7179588-serving-cert\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.667712 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/15b76009-3c42-43cf-a69a-bd15c7179588-proxy-ca-bundles\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.667818 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b76009-3c42-43cf-a69a-bd15c7179588-serving-cert\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.667852 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15b76009-3c42-43cf-a69a-bd15c7179588-client-ca\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.667883 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7pjp\" (UniqueName: \"kubernetes.io/projected/15b76009-3c42-43cf-a69a-bd15c7179588-kube-api-access-d7pjp\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.667928 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b76009-3c42-43cf-a69a-bd15c7179588-config\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.669378 5000 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15b76009-3c42-43cf-a69a-bd15c7179588-client-ca\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.669455 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b76009-3c42-43cf-a69a-bd15c7179588-config\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.669996 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15b76009-3c42-43cf-a69a-bd15c7179588-proxy-ca-bundles\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.673915 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b76009-3c42-43cf-a69a-bd15c7179588-serving-cert\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.698250 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7pjp\" (UniqueName: \"kubernetes.io/projected/15b76009-3c42-43cf-a69a-bd15c7179588-kube-api-access-d7pjp\") pod \"controller-manager-6b55dc9975-x4257\" (UID: \"15b76009-3c42-43cf-a69a-bd15c7179588\") " pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 
21:40:49 crc kubenswrapper[5000]: I0105 21:40:49.739143 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:50 crc kubenswrapper[5000]: I0105 21:40:50.145567 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b55dc9975-x4257"] Jan 05 21:40:50 crc kubenswrapper[5000]: W0105 21:40:50.151225 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15b76009_3c42_43cf_a69a_bd15c7179588.slice/crio-08f080ee4846f6b771a99fa546d51682b41a48654336d82ea816634d08216968 WatchSource:0}: Error finding container 08f080ee4846f6b771a99fa546d51682b41a48654336d82ea816634d08216968: Status 404 returned error can't find the container with id 08f080ee4846f6b771a99fa546d51682b41a48654336d82ea816634d08216968 Jan 05 21:40:50 crc kubenswrapper[5000]: I0105 21:40:50.892250 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" event={"ID":"15b76009-3c42-43cf-a69a-bd15c7179588","Type":"ContainerStarted","Data":"b720dfcea0f2b9ad02f9827206efa16bc997570c08a1f152b1674030cf354775"} Jan 05 21:40:50 crc kubenswrapper[5000]: I0105 21:40:50.892609 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" event={"ID":"15b76009-3c42-43cf-a69a-bd15c7179588","Type":"ContainerStarted","Data":"08f080ee4846f6b771a99fa546d51682b41a48654336d82ea816634d08216968"} Jan 05 21:40:50 crc kubenswrapper[5000]: I0105 21:40:50.893472 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:50 crc kubenswrapper[5000]: I0105 21:40:50.897288 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" Jan 05 21:40:50 crc kubenswrapper[5000]: I0105 21:40:50.915750 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b55dc9975-x4257" podStartSLOduration=3.915721538 podStartE2EDuration="3.915721538s" podCreationTimestamp="2026-01-05 21:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:40:50.910497549 +0000 UTC m=+405.866700038" watchObservedRunningTime="2026-01-05 21:40:50.915721538 +0000 UTC m=+405.871924017" Jan 05 21:40:53 crc kubenswrapper[5000]: I0105 21:40:53.099726 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:40:53 crc kubenswrapper[5000]: I0105 21:40:53.100136 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:40:53 crc kubenswrapper[5000]: I0105 21:40:53.100183 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:40:53 crc kubenswrapper[5000]: I0105 21:40:53.100874 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce647f76a2224015ddb59c7a18d4416444bb91c80f4ee4c3462325a8a5e9a2df"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:40:53 crc kubenswrapper[5000]: I0105 21:40:53.100975 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://ce647f76a2224015ddb59c7a18d4416444bb91c80f4ee4c3462325a8a5e9a2df" gracePeriod=600 Jan 05 21:40:53 crc kubenswrapper[5000]: I0105 21:40:53.910005 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="ce647f76a2224015ddb59c7a18d4416444bb91c80f4ee4c3462325a8a5e9a2df" exitCode=0 Jan 05 21:40:53 crc kubenswrapper[5000]: I0105 21:40:53.910078 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"ce647f76a2224015ddb59c7a18d4416444bb91c80f4ee4c3462325a8a5e9a2df"} Jan 05 21:40:53 crc kubenswrapper[5000]: I0105 21:40:53.910241 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"5525c98bb5caf2b87bda34b84fcf1b0890fe58e7097f12bd761f68d5981ed84c"} Jan 05 21:40:53 crc kubenswrapper[5000]: I0105 21:40:53.910267 5000 scope.go:117] "RemoveContainer" containerID="d2c6ebb9a7f0e78c0b659e3d2105b8ad7e3a2e3606c29310e148be970c090222" Jan 05 21:41:09 crc kubenswrapper[5000]: I0105 21:41:09.677883 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" podUID="494f7900-b32c-47c4-8f3b-33dc5a054a7c" containerName="registry" containerID="cri-o://c7fa82335a1127fdf8e30c7fe3f1f5ec8f4f3fc49bbe416ffb0829694f98b1be" gracePeriod=30 Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 
21:41:09.999316 5000 generic.go:334] "Generic (PLEG): container finished" podID="494f7900-b32c-47c4-8f3b-33dc5a054a7c" containerID="c7fa82335a1127fdf8e30c7fe3f1f5ec8f4f3fc49bbe416ffb0829694f98b1be" exitCode=0 Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:09.999390 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" event={"ID":"494f7900-b32c-47c4-8f3b-33dc5a054a7c","Type":"ContainerDied","Data":"c7fa82335a1127fdf8e30c7fe3f1f5ec8f4f3fc49bbe416ffb0829694f98b1be"} Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.191070 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.336964 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/494f7900-b32c-47c4-8f3b-33dc5a054a7c-ca-trust-extracted\") pod \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.337130 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-registry-tls\") pod \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.337193 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/494f7900-b32c-47c4-8f3b-33dc5a054a7c-registry-certificates\") pod \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.338014 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/494f7900-b32c-47c4-8f3b-33dc5a054a7c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "494f7900-b32c-47c4-8f3b-33dc5a054a7c" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.338227 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/494f7900-b32c-47c4-8f3b-33dc5a054a7c-installation-pull-secrets\") pod \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.338260 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-bound-sa-token\") pod \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.338311 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j26zk\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-kube-api-access-j26zk\") pod \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.338501 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.339314 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/494f7900-b32c-47c4-8f3b-33dc5a054a7c-trusted-ca\") pod \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\" (UID: \"494f7900-b32c-47c4-8f3b-33dc5a054a7c\") " Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.339759 5000 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/494f7900-b32c-47c4-8f3b-33dc5a054a7c-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.340108 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/494f7900-b32c-47c4-8f3b-33dc5a054a7c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "494f7900-b32c-47c4-8f3b-33dc5a054a7c" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.342967 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "494f7900-b32c-47c4-8f3b-33dc5a054a7c" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.343258 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494f7900-b32c-47c4-8f3b-33dc5a054a7c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "494f7900-b32c-47c4-8f3b-33dc5a054a7c" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.343349 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "494f7900-b32c-47c4-8f3b-33dc5a054a7c" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.345415 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-kube-api-access-j26zk" (OuterVolumeSpecName: "kube-api-access-j26zk") pod "494f7900-b32c-47c4-8f3b-33dc5a054a7c" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c"). InnerVolumeSpecName "kube-api-access-j26zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.351946 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "494f7900-b32c-47c4-8f3b-33dc5a054a7c" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.358733 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/494f7900-b32c-47c4-8f3b-33dc5a054a7c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "494f7900-b32c-47c4-8f3b-33dc5a054a7c" (UID: "494f7900-b32c-47c4-8f3b-33dc5a054a7c"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.441210 5000 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/494f7900-b32c-47c4-8f3b-33dc5a054a7c-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.441557 5000 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/494f7900-b32c-47c4-8f3b-33dc5a054a7c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.441583 5000 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.441606 5000 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/494f7900-b32c-47c4-8f3b-33dc5a054a7c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.441630 5000 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 05 21:41:10 crc kubenswrapper[5000]: I0105 21:41:10.441651 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j26zk\" (UniqueName: \"kubernetes.io/projected/494f7900-b32c-47c4-8f3b-33dc5a054a7c-kube-api-access-j26zk\") on node \"crc\" DevicePath \"\"" Jan 05 21:41:11 crc kubenswrapper[5000]: I0105 21:41:11.007257 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" 
event={"ID":"494f7900-b32c-47c4-8f3b-33dc5a054a7c","Type":"ContainerDied","Data":"e2f692e899bb9a015c76c97726934991c62da0f43fe79662b222af9d347ca533"} Jan 05 21:41:11 crc kubenswrapper[5000]: I0105 21:41:11.007301 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w4mfk" Jan 05 21:41:11 crc kubenswrapper[5000]: I0105 21:41:11.007337 5000 scope.go:117] "RemoveContainer" containerID="c7fa82335a1127fdf8e30c7fe3f1f5ec8f4f3fc49bbe416ffb0829694f98b1be" Jan 05 21:41:11 crc kubenswrapper[5000]: I0105 21:41:11.042866 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w4mfk"] Jan 05 21:41:11 crc kubenswrapper[5000]: I0105 21:41:11.047243 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w4mfk"] Jan 05 21:41:11 crc kubenswrapper[5000]: I0105 21:41:11.332433 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="494f7900-b32c-47c4-8f3b-33dc5a054a7c" path="/var/lib/kubelet/pods/494f7900-b32c-47c4-8f3b-33dc5a054a7c/volumes" Jan 05 21:42:53 crc kubenswrapper[5000]: I0105 21:42:53.099284 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:42:53 crc kubenswrapper[5000]: I0105 21:42:53.101023 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:43:05 crc kubenswrapper[5000]: I0105 21:43:05.519127 5000 scope.go:117] "RemoveContainer" 
containerID="f1cd5e6d60c1a9cb54d2334a956b33afcc098a17cd359e001ee1a0a993ce0d6a" Jan 05 21:43:23 crc kubenswrapper[5000]: I0105 21:43:23.099332 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:43:23 crc kubenswrapper[5000]: I0105 21:43:23.100011 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:43:49 crc kubenswrapper[5000]: I0105 21:43:49.967462 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mvh6l"] Jan 05 21:43:49 crc kubenswrapper[5000]: E0105 21:43:49.968196 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="494f7900-b32c-47c4-8f3b-33dc5a054a7c" containerName="registry" Jan 05 21:43:49 crc kubenswrapper[5000]: I0105 21:43:49.968207 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="494f7900-b32c-47c4-8f3b-33dc5a054a7c" containerName="registry" Jan 05 21:43:49 crc kubenswrapper[5000]: I0105 21:43:49.968314 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="494f7900-b32c-47c4-8f3b-33dc5a054a7c" containerName="registry" Jan 05 21:43:49 crc kubenswrapper[5000]: I0105 21:43:49.968662 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvh6l" Jan 05 21:43:49 crc kubenswrapper[5000]: I0105 21:43:49.972080 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 05 21:43:49 crc kubenswrapper[5000]: I0105 21:43:49.977242 5000 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wkqsr" Jan 05 21:43:49 crc kubenswrapper[5000]: I0105 21:43:49.977255 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-d7hcb"] Jan 05 21:43:49 crc kubenswrapper[5000]: I0105 21:43:49.990125 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 05 21:43:49 crc kubenswrapper[5000]: I0105 21:43:49.993149 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-d7hcb" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.000182 5000 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-s4kg8" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.003322 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mvh6l"] Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.018614 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-d7hcb"] Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.024722 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pgdwz"] Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.025537 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-pgdwz" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.027081 5000 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mgpc6" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.039472 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pgdwz"] Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.059256 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf4g5\" (UniqueName: \"kubernetes.io/projected/61ca53f0-4a50-4090-846e-cfe229006c13-kube-api-access-mf4g5\") pod \"cert-manager-webhook-687f57d79b-pgdwz\" (UID: \"61ca53f0-4a50-4090-846e-cfe229006c13\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pgdwz" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.059335 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9d4z\" (UniqueName: \"kubernetes.io/projected/0edf1980-d816-4cf8-ac70-c0a92cb8ca7c-kube-api-access-t9d4z\") pod \"cert-manager-858654f9db-d7hcb\" (UID: \"0edf1980-d816-4cf8-ac70-c0a92cb8ca7c\") " pod="cert-manager/cert-manager-858654f9db-d7hcb" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.059396 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s54dn\" (UniqueName: \"kubernetes.io/projected/e567f6b1-10dc-4a2a-9ebb-2837b486af32-kube-api-access-s54dn\") pod \"cert-manager-cainjector-cf98fcc89-mvh6l\" (UID: \"e567f6b1-10dc-4a2a-9ebb-2837b486af32\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvh6l" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.160815 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s54dn\" (UniqueName: 
\"kubernetes.io/projected/e567f6b1-10dc-4a2a-9ebb-2837b486af32-kube-api-access-s54dn\") pod \"cert-manager-cainjector-cf98fcc89-mvh6l\" (UID: \"e567f6b1-10dc-4a2a-9ebb-2837b486af32\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvh6l" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.160907 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf4g5\" (UniqueName: \"kubernetes.io/projected/61ca53f0-4a50-4090-846e-cfe229006c13-kube-api-access-mf4g5\") pod \"cert-manager-webhook-687f57d79b-pgdwz\" (UID: \"61ca53f0-4a50-4090-846e-cfe229006c13\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pgdwz" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.160974 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9d4z\" (UniqueName: \"kubernetes.io/projected/0edf1980-d816-4cf8-ac70-c0a92cb8ca7c-kube-api-access-t9d4z\") pod \"cert-manager-858654f9db-d7hcb\" (UID: \"0edf1980-d816-4cf8-ac70-c0a92cb8ca7c\") " pod="cert-manager/cert-manager-858654f9db-d7hcb" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.182751 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s54dn\" (UniqueName: \"kubernetes.io/projected/e567f6b1-10dc-4a2a-9ebb-2837b486af32-kube-api-access-s54dn\") pod \"cert-manager-cainjector-cf98fcc89-mvh6l\" (UID: \"e567f6b1-10dc-4a2a-9ebb-2837b486af32\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvh6l" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.182872 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9d4z\" (UniqueName: \"kubernetes.io/projected/0edf1980-d816-4cf8-ac70-c0a92cb8ca7c-kube-api-access-t9d4z\") pod \"cert-manager-858654f9db-d7hcb\" (UID: \"0edf1980-d816-4cf8-ac70-c0a92cb8ca7c\") " pod="cert-manager/cert-manager-858654f9db-d7hcb" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.183589 5000 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mf4g5\" (UniqueName: \"kubernetes.io/projected/61ca53f0-4a50-4090-846e-cfe229006c13-kube-api-access-mf4g5\") pod \"cert-manager-webhook-687f57d79b-pgdwz\" (UID: \"61ca53f0-4a50-4090-846e-cfe229006c13\") " pod="cert-manager/cert-manager-webhook-687f57d79b-pgdwz" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.296707 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvh6l" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.343612 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-pgdwz" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.408826 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-d7hcb" Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.825509 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mvh6l"] Jan 05 21:43:50 crc kubenswrapper[5000]: W0105 21:43:50.829897 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode567f6b1_10dc_4a2a_9ebb_2837b486af32.slice/crio-da4d6344bb73d43b41d9ed70525cf2e1da92fd1c0b1e9a9d3b843334a1535a8f WatchSource:0}: Error finding container da4d6344bb73d43b41d9ed70525cf2e1da92fd1c0b1e9a9d3b843334a1535a8f: Status 404 returned error can't find the container with id da4d6344bb73d43b41d9ed70525cf2e1da92fd1c0b1e9a9d3b843334a1535a8f Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.831697 5000 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.875185 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-pgdwz"] Jan 05 21:43:50 crc 
kubenswrapper[5000]: I0105 21:43:50.878847 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-d7hcb"] Jan 05 21:43:50 crc kubenswrapper[5000]: I0105 21:43:50.883275 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvh6l" event={"ID":"e567f6b1-10dc-4a2a-9ebb-2837b486af32","Type":"ContainerStarted","Data":"da4d6344bb73d43b41d9ed70525cf2e1da92fd1c0b1e9a9d3b843334a1535a8f"} Jan 05 21:43:51 crc kubenswrapper[5000]: I0105 21:43:51.889154 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-d7hcb" event={"ID":"0edf1980-d816-4cf8-ac70-c0a92cb8ca7c","Type":"ContainerStarted","Data":"4d2f86bb274c0454c9bbbb269c9a3525b262d01fddb7d75c1263ea0b60886f34"} Jan 05 21:43:51 crc kubenswrapper[5000]: I0105 21:43:51.890020 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-pgdwz" event={"ID":"61ca53f0-4a50-4090-846e-cfe229006c13","Type":"ContainerStarted","Data":"cf6498d7580a13e1a058999807d29e1d045b56659a0043812ba12f1414acbf84"} Jan 05 21:43:53 crc kubenswrapper[5000]: I0105 21:43:53.098713 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:43:53 crc kubenswrapper[5000]: I0105 21:43:53.099079 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:43:53 crc kubenswrapper[5000]: I0105 21:43:53.099123 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:43:53 crc kubenswrapper[5000]: I0105 21:43:53.099665 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5525c98bb5caf2b87bda34b84fcf1b0890fe58e7097f12bd761f68d5981ed84c"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:43:53 crc kubenswrapper[5000]: I0105 21:43:53.099721 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://5525c98bb5caf2b87bda34b84fcf1b0890fe58e7097f12bd761f68d5981ed84c" gracePeriod=600 Jan 05 21:43:53 crc kubenswrapper[5000]: I0105 21:43:53.901360 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="5525c98bb5caf2b87bda34b84fcf1b0890fe58e7097f12bd761f68d5981ed84c" exitCode=0 Jan 05 21:43:53 crc kubenswrapper[5000]: I0105 21:43:53.901407 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"5525c98bb5caf2b87bda34b84fcf1b0890fe58e7097f12bd761f68d5981ed84c"} Jan 05 21:43:53 crc kubenswrapper[5000]: I0105 21:43:53.901451 5000 scope.go:117] "RemoveContainer" containerID="ce647f76a2224015ddb59c7a18d4416444bb91c80f4ee4c3462325a8a5e9a2df" Jan 05 21:43:54 crc kubenswrapper[5000]: I0105 21:43:54.908915 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" 
event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"cf4c8cd2c0e0c7d61f54579da2fd7b1a52efe0ef420b5d0f2c3068e03afe71bf"} Jan 05 21:43:54 crc kubenswrapper[5000]: I0105 21:43:54.909921 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-d7hcb" event={"ID":"0edf1980-d816-4cf8-ac70-c0a92cb8ca7c","Type":"ContainerStarted","Data":"1d7a6851aab54502b33615b3ba38e3d7a4f2411919a766471b1b4508bcdf9ac4"} Jan 05 21:43:54 crc kubenswrapper[5000]: I0105 21:43:54.910985 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvh6l" event={"ID":"e567f6b1-10dc-4a2a-9ebb-2837b486af32","Type":"ContainerStarted","Data":"32b26ac2843bd1498e53fdad8bcc03206b56d5766a02f384cb184fb5f6931ef1"} Jan 05 21:43:54 crc kubenswrapper[5000]: I0105 21:43:54.912084 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-pgdwz" event={"ID":"61ca53f0-4a50-4090-846e-cfe229006c13","Type":"ContainerStarted","Data":"a2706435920a08888c9f96eadf89d66ed4c757de5974ce18cd2a7346945a6769"} Jan 05 21:43:54 crc kubenswrapper[5000]: I0105 21:43:54.912200 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-pgdwz" Jan 05 21:43:54 crc kubenswrapper[5000]: I0105 21:43:54.940194 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-d7hcb" podStartSLOduration=2.601260557 podStartE2EDuration="5.940167923s" podCreationTimestamp="2026-01-05 21:43:49 +0000 UTC" firstStartedPulling="2026-01-05 21:43:50.882958862 +0000 UTC m=+585.839161331" lastFinishedPulling="2026-01-05 21:43:54.221866228 +0000 UTC m=+589.178068697" observedRunningTime="2026-01-05 21:43:54.933665617 +0000 UTC m=+589.889868076" watchObservedRunningTime="2026-01-05 21:43:54.940167923 +0000 UTC m=+589.896370402" Jan 05 21:43:54 crc kubenswrapper[5000]: I0105 
21:43:54.956102 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-pgdwz" podStartSLOduration=2.598640631 podStartE2EDuration="5.956079637s" podCreationTimestamp="2026-01-05 21:43:49 +0000 UTC" firstStartedPulling="2026-01-05 21:43:50.881323105 +0000 UTC m=+585.837525574" lastFinishedPulling="2026-01-05 21:43:54.238762121 +0000 UTC m=+589.194964580" observedRunningTime="2026-01-05 21:43:54.951681912 +0000 UTC m=+589.907884401" watchObservedRunningTime="2026-01-05 21:43:54.956079637 +0000 UTC m=+589.912282096" Jan 05 21:43:55 crc kubenswrapper[5000]: I0105 21:43:54.975344 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mvh6l" podStartSLOduration=2.583496299 podStartE2EDuration="5.975258065s" podCreationTimestamp="2026-01-05 21:43:49 +0000 UTC" firstStartedPulling="2026-01-05 21:43:50.831482771 +0000 UTC m=+585.787685240" lastFinishedPulling="2026-01-05 21:43:54.223244537 +0000 UTC m=+589.179447006" observedRunningTime="2026-01-05 21:43:54.965822256 +0000 UTC m=+589.922024725" watchObservedRunningTime="2026-01-05 21:43:54.975258065 +0000 UTC m=+589.931460534" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.591719 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f5k4c"] Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.592640 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovn-controller" containerID="cri-o://45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059" gracePeriod=30 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.592698 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" 
containerName="nbdb" containerID="cri-o://8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c" gracePeriod=30 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.592750 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="northd" containerID="cri-o://e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12" gracePeriod=30 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.592783 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532" gracePeriod=30 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.592811 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="kube-rbac-proxy-node" containerID="cri-o://51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19" gracePeriod=30 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.592839 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovn-acl-logging" containerID="cri-o://7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367" gracePeriod=30 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.593065 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="sbdb" containerID="cri-o://31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612" gracePeriod=30 Jan 05 21:43:59 crc 
kubenswrapper[5000]: I0105 21:43:59.619765 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" containerID="cri-o://fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915" gracePeriod=30 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.883713 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/3.log" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.886855 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovn-acl-logging/0.log" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.887442 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovn-controller/0.log" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.887874 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932152 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zm2n7"] Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.932409 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovn-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932425 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovn-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.932436 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932444 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.932457 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932466 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.932475 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovn-acl-logging" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932482 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovn-acl-logging" Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.932494 5000 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="kubecfg-setup" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932502 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="kubecfg-setup" Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.932514 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="northd" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932522 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="northd" Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.932533 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932541 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.932551 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932558 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.932567 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="sbdb" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932574 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="sbdb" Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.932585 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" 
containerName="nbdb" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932592 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="nbdb" Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.932603 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="kube-rbac-proxy-node" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932611 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="kube-rbac-proxy-node" Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.932621 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="kube-rbac-proxy-ovn-metrics" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932628 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="kube-rbac-proxy-ovn-metrics" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932769 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932785 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="northd" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932794 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="sbdb" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932802 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932811 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" 
containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932819 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="kube-rbac-proxy-ovn-metrics" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932827 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovn-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932836 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="nbdb" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932848 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932858 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="kube-rbac-proxy-node" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.932869 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovn-acl-logging" Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.932997 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.933008 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.933130 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerName="ovnkube-controller" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.935001 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.941459 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovnkube-controller/3.log" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.944309 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovn-acl-logging/0.log" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.945015 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f5k4c_a1406b03-70e6-4874-8cfe-5991e43cc720/ovn-controller/0.log" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.945354 5000 generic.go:334] "Generic (PLEG): container finished" podID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerID="fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915" exitCode=0 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.945433 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.945455 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.945596 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.945633 5000 scope.go:117] "RemoveContainer" containerID="fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.945445 5000 generic.go:334] "Generic (PLEG): container finished" podID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerID="31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612" exitCode=0 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.945820 5000 generic.go:334] "Generic (PLEG): container finished" podID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerID="8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c" exitCode=0 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.945958 5000 generic.go:334] "Generic (PLEG): container finished" podID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerID="e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12" exitCode=0 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946021 5000 generic.go:334] "Generic (PLEG): container finished" podID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerID="7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532" exitCode=0 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946070 5000 generic.go:334] "Generic (PLEG): container finished" podID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerID="51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19" exitCode=0 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946126 5000 generic.go:334] "Generic (PLEG): container finished" 
podID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerID="7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367" exitCode=143 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946183 5000 generic.go:334] "Generic (PLEG): container finished" podID="a1406b03-70e6-4874-8cfe-5991e43cc720" containerID="45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059" exitCode=143 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.945853 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946307 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946328 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946407 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946433 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946447 5000 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946455 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946462 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946469 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946476 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946484 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946490 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946496 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946506 5000 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946518 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946526 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946557 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946565 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946571 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946578 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946584 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19"} Jan 05 
21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946591 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946598 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946604 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946614 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946625 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946634 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946642 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946649 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946658 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946664 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946671 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946678 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946684 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946690 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946699 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f5k4c" event={"ID":"a1406b03-70e6-4874-8cfe-5991e43cc720","Type":"ContainerDied","Data":"6ccb50c47127fcfbee8e906d5bdd07f3c7b97e5d905ee6c5f92433458c7f224b"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946711 5000 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946721 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946728 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946735 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946741 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946747 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946754 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946761 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946768 5000 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.946775 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.949543 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sd8pl_c10b7118-eb24-495a-bb8f-bc46a3c38799/kube-multus/2.log" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.949961 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sd8pl_c10b7118-eb24-495a-bb8f-bc46a3c38799/kube-multus/1.log" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.949997 5000 generic.go:334] "Generic (PLEG): container finished" podID="c10b7118-eb24-495a-bb8f-bc46a3c38799" containerID="56e710d4bb2d817674bc8f198e27521b38e972da7d83bffffca3188109845c6f" exitCode=2 Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.950020 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sd8pl" event={"ID":"c10b7118-eb24-495a-bb8f-bc46a3c38799","Type":"ContainerDied","Data":"56e710d4bb2d817674bc8f198e27521b38e972da7d83bffffca3188109845c6f"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.950044 5000 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9046be61fa273923c77fe35be04fbf84a891ee4c803f73f42de122fa83f8ba0"} Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.950376 5000 scope.go:117] "RemoveContainer" containerID="56e710d4bb2d817674bc8f198e27521b38e972da7d83bffffca3188109845c6f" Jan 05 21:43:59 crc kubenswrapper[5000]: E0105 21:43:59.950534 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-sd8pl_openshift-multus(c10b7118-eb24-495a-bb8f-bc46a3c38799)\"" pod="openshift-multus/multus-sd8pl" podUID="c10b7118-eb24-495a-bb8f-bc46a3c38799" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.970679 5000 scope.go:117] "RemoveContainer" containerID="a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7" Jan 05 21:43:59 crc kubenswrapper[5000]: I0105 21:43:59.990838 5000 scope.go:117] "RemoveContainer" containerID="31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.002649 5000 scope.go:117] "RemoveContainer" containerID="8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.013671 5000 scope.go:117] "RemoveContainer" containerID="e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.021476 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.021532 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2h8f\" (UniqueName: \"kubernetes.io/projected/a1406b03-70e6-4874-8cfe-5991e43cc720-kube-api-access-x2h8f\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.021574 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-var-lib-cni-networks-ovn-kubernetes" 
(OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022013 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-slash\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022047 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-ovnkube-config\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022067 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-openvswitch\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022085 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-run-netns\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022102 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-systemd-units\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: 
\"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022127 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-var-lib-openvswitch\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022143 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-cni-bin\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022156 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-etc-openvswitch\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022177 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-node-log\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022197 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-kubelet\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022211 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-systemd\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022227 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-log-socket\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022242 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-cni-netd\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022269 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a1406b03-70e6-4874-8cfe-5991e43cc720-ovn-node-metrics-cert\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022286 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-run-ovn-kubernetes\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022299 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022307 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-ovnkube-script-lib\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022319 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022365 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-env-overrides\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022381 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022390 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-ovn\") pod \"a1406b03-70e6-4874-8cfe-5991e43cc720\" (UID: \"a1406b03-70e6-4874-8cfe-5991e43cc720\") " Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022412 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022465 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/646fb284-322e-4c84-819f-b4bc9ba3c6c0-ovnkube-config\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022486 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-systemd-units\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022506 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/646fb284-322e-4c84-819f-b4bc9ba3c6c0-ovnkube-script-lib\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022526 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-var-lib-openvswitch\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022576 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-log-socket\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022602 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/646fb284-322e-4c84-819f-b4bc9ba3c6c0-ovn-node-metrics-cert\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022629 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgch6\" (UniqueName: \"kubernetes.io/projected/646fb284-322e-4c84-819f-b4bc9ba3c6c0-kube-api-access-cgch6\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022652 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022679 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-run-netns\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022696 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-etc-openvswitch\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022721 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-cni-bin\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022737 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/646fb284-322e-4c84-819f-b4bc9ba3c6c0-env-overrides\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022740 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022759 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-kubelet\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022773 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-log-socket" (OuterVolumeSpecName: "log-socket") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022775 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-cni-netd\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022794 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022819 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-run-ovn-kubernetes\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022851 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-run-systemd\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022866 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-run-ovn\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022928 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-run-openvswitch\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022946 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-slash\") pod \"ovnkube-node-zm2n7\" (UID: 
\"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022968 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-node-log\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.022998 5000 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.023008 5000 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.023017 5000 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.023026 5000 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.023034 5000 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-log-socket\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.023042 5000 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.023051 5000 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.023059 5000 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.023092 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-slash" (OuterVolumeSpecName: "host-slash") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.023461 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.023760 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.023786 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.024038 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.024071 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.024088 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-node-log" (OuterVolumeSpecName: "node-log") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.024096 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.024430 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.025539 5000 scope.go:117] "RemoveContainer" containerID="7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.026175 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1406b03-70e6-4874-8cfe-5991e43cc720-kube-api-access-x2h8f" (OuterVolumeSpecName: "kube-api-access-x2h8f") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "kube-api-access-x2h8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.026499 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1406b03-70e6-4874-8cfe-5991e43cc720-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.033633 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a1406b03-70e6-4874-8cfe-5991e43cc720" (UID: "a1406b03-70e6-4874-8cfe-5991e43cc720"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.037551 5000 scope.go:117] "RemoveContainer" containerID="51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.047577 5000 scope.go:117] "RemoveContainer" containerID="7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.056697 5000 scope.go:117] "RemoveContainer" containerID="45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.068775 5000 scope.go:117] "RemoveContainer" containerID="58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.079999 5000 scope.go:117] "RemoveContainer" containerID="fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915" Jan 05 21:44:00 crc kubenswrapper[5000]: E0105 21:44:00.080517 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915\": container with ID starting with fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915 not found: ID does not exist" containerID="fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.080568 5000 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915"} err="failed to get container status \"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915\": rpc error: code = NotFound desc = could not find container \"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915\": container with ID starting with fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.080601 5000 scope.go:117] "RemoveContainer" containerID="a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7" Jan 05 21:44:00 crc kubenswrapper[5000]: E0105 21:44:00.081129 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\": container with ID starting with a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7 not found: ID does not exist" containerID="a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.081206 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7"} err="failed to get container status \"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\": rpc error: code = NotFound desc = could not find container \"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\": container with ID starting with a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.081267 5000 scope.go:117] "RemoveContainer" containerID="31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612" Jan 05 21:44:00 crc kubenswrapper[5000]: E0105 21:44:00.081694 5000 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\": container with ID starting with 31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612 not found: ID does not exist" containerID="31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.081724 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612"} err="failed to get container status \"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\": rpc error: code = NotFound desc = could not find container \"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\": container with ID starting with 31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.081745 5000 scope.go:117] "RemoveContainer" containerID="8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c" Jan 05 21:44:00 crc kubenswrapper[5000]: E0105 21:44:00.082386 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\": container with ID starting with 8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c not found: ID does not exist" containerID="8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.082445 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c"} err="failed to get container status \"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\": rpc error: code = NotFound desc = could not find container 
\"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\": container with ID starting with 8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.082483 5000 scope.go:117] "RemoveContainer" containerID="e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12" Jan 05 21:44:00 crc kubenswrapper[5000]: E0105 21:44:00.082847 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\": container with ID starting with e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12 not found: ID does not exist" containerID="e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.082976 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12"} err="failed to get container status \"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\": rpc error: code = NotFound desc = could not find container \"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\": container with ID starting with e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.083025 5000 scope.go:117] "RemoveContainer" containerID="7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532" Jan 05 21:44:00 crc kubenswrapper[5000]: E0105 21:44:00.083655 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\": container with ID starting with 7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532 not found: ID does not exist" 
containerID="7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.083717 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532"} err="failed to get container status \"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\": rpc error: code = NotFound desc = could not find container \"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\": container with ID starting with 7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.083746 5000 scope.go:117] "RemoveContainer" containerID="51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19" Jan 05 21:44:00 crc kubenswrapper[5000]: E0105 21:44:00.084186 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\": container with ID starting with 51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19 not found: ID does not exist" containerID="51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.084244 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19"} err="failed to get container status \"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\": rpc error: code = NotFound desc = could not find container \"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\": container with ID starting with 51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.084457 5000 scope.go:117] 
"RemoveContainer" containerID="7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367" Jan 05 21:44:00 crc kubenswrapper[5000]: E0105 21:44:00.084931 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\": container with ID starting with 7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367 not found: ID does not exist" containerID="7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.084968 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367"} err="failed to get container status \"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\": rpc error: code = NotFound desc = could not find container \"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\": container with ID starting with 7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.084987 5000 scope.go:117] "RemoveContainer" containerID="45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059" Jan 05 21:44:00 crc kubenswrapper[5000]: E0105 21:44:00.085302 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\": container with ID starting with 45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059 not found: ID does not exist" containerID="45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.085361 5000 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059"} err="failed to get container status \"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\": rpc error: code = NotFound desc = could not find container \"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\": container with ID starting with 45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.085400 5000 scope.go:117] "RemoveContainer" containerID="58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29" Jan 05 21:44:00 crc kubenswrapper[5000]: E0105 21:44:00.085836 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\": container with ID starting with 58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29 not found: ID does not exist" containerID="58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.085867 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29"} err="failed to get container status \"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\": rpc error: code = NotFound desc = could not find container \"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\": container with ID starting with 58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.085914 5000 scope.go:117] "RemoveContainer" containerID="fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.086194 5000 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915"} err="failed to get container status \"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915\": rpc error: code = NotFound desc = could not find container \"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915\": container with ID starting with fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.086233 5000 scope.go:117] "RemoveContainer" containerID="a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.086500 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7"} err="failed to get container status \"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\": rpc error: code = NotFound desc = could not find container \"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\": container with ID starting with a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.086529 5000 scope.go:117] "RemoveContainer" containerID="31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.086964 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612"} err="failed to get container status \"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\": rpc error: code = NotFound desc = could not find container \"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\": container with ID starting with 31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612 not 
found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.087014 5000 scope.go:117] "RemoveContainer" containerID="8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.087280 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c"} err="failed to get container status \"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\": rpc error: code = NotFound desc = could not find container \"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\": container with ID starting with 8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.087307 5000 scope.go:117] "RemoveContainer" containerID="e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.087593 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12"} err="failed to get container status \"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\": rpc error: code = NotFound desc = could not find container \"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\": container with ID starting with e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.087621 5000 scope.go:117] "RemoveContainer" containerID="7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.087917 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532"} err="failed to get 
container status \"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\": rpc error: code = NotFound desc = could not find container \"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\": container with ID starting with 7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.087954 5000 scope.go:117] "RemoveContainer" containerID="51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.088387 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19"} err="failed to get container status \"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\": rpc error: code = NotFound desc = could not find container \"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\": container with ID starting with 51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.088414 5000 scope.go:117] "RemoveContainer" containerID="7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.088632 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367"} err="failed to get container status \"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\": rpc error: code = NotFound desc = could not find container \"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\": container with ID starting with 7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.088664 5000 scope.go:117] "RemoveContainer" 
containerID="45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.089408 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059"} err="failed to get container status \"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\": rpc error: code = NotFound desc = could not find container \"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\": container with ID starting with 45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.089438 5000 scope.go:117] "RemoveContainer" containerID="58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.089667 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29"} err="failed to get container status \"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\": rpc error: code = NotFound desc = could not find container \"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\": container with ID starting with 58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.089690 5000 scope.go:117] "RemoveContainer" containerID="fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.089907 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915"} err="failed to get container status \"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915\": rpc error: code = NotFound desc = could 
not find container \"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915\": container with ID starting with fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.089936 5000 scope.go:117] "RemoveContainer" containerID="a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.090148 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7"} err="failed to get container status \"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\": rpc error: code = NotFound desc = could not find container \"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\": container with ID starting with a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.090167 5000 scope.go:117] "RemoveContainer" containerID="31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.090365 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612"} err="failed to get container status \"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\": rpc error: code = NotFound desc = could not find container \"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\": container with ID starting with 31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.090394 5000 scope.go:117] "RemoveContainer" containerID="8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 
21:44:00.090625 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c"} err="failed to get container status \"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\": rpc error: code = NotFound desc = could not find container \"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\": container with ID starting with 8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.090656 5000 scope.go:117] "RemoveContainer" containerID="e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.090865 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12"} err="failed to get container status \"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\": rpc error: code = NotFound desc = could not find container \"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\": container with ID starting with e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.090884 5000 scope.go:117] "RemoveContainer" containerID="7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.091062 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532"} err="failed to get container status \"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\": rpc error: code = NotFound desc = could not find container \"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\": container with ID starting with 
7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.091079 5000 scope.go:117] "RemoveContainer" containerID="51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.091275 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19"} err="failed to get container status \"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\": rpc error: code = NotFound desc = could not find container \"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\": container with ID starting with 51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.091302 5000 scope.go:117] "RemoveContainer" containerID="7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.091480 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367"} err="failed to get container status \"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\": rpc error: code = NotFound desc = could not find container \"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\": container with ID starting with 7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.091503 5000 scope.go:117] "RemoveContainer" containerID="45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.091699 5000 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059"} err="failed to get container status \"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\": rpc error: code = NotFound desc = could not find container \"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\": container with ID starting with 45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.091723 5000 scope.go:117] "RemoveContainer" containerID="58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.092002 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29"} err="failed to get container status \"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\": rpc error: code = NotFound desc = could not find container \"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\": container with ID starting with 58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.092039 5000 scope.go:117] "RemoveContainer" containerID="fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.092236 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915"} err="failed to get container status \"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915\": rpc error: code = NotFound desc = could not find container \"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915\": container with ID starting with fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915 not found: ID does not 
exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.092263 5000 scope.go:117] "RemoveContainer" containerID="a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.092470 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7"} err="failed to get container status \"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\": rpc error: code = NotFound desc = could not find container \"a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7\": container with ID starting with a6bbd1308288a0315d22244c99f7826a23070072a0d7b87a0f2c9906f306b1e7 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.092496 5000 scope.go:117] "RemoveContainer" containerID="31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.092676 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612"} err="failed to get container status \"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\": rpc error: code = NotFound desc = could not find container \"31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612\": container with ID starting with 31b2f3c16226f198da5d1c110e6574fe336a0f9436a5e8d5f7850e8294cba612 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.092695 5000 scope.go:117] "RemoveContainer" containerID="8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.092886 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c"} err="failed to get container status 
\"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\": rpc error: code = NotFound desc = could not find container \"8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c\": container with ID starting with 8ed4847aa04a53ce3eec64b6f5bd0cf23a9c05c2785877f00242b2880f66927c not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.092928 5000 scope.go:117] "RemoveContainer" containerID="e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.093125 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12"} err="failed to get container status \"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\": rpc error: code = NotFound desc = could not find container \"e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12\": container with ID starting with e8ec1cd32d567fb90509a2f28506a297686360e3d8391c68368b3e59a781ab12 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.093144 5000 scope.go:117] "RemoveContainer" containerID="7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.093336 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532"} err="failed to get container status \"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\": rpc error: code = NotFound desc = could not find container \"7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532\": container with ID starting with 7b07a029dd8b8626384c45d4704e5ad2a704b38c9f743428b418e02cab21e532 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.093361 5000 scope.go:117] "RemoveContainer" 
containerID="51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.093528 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19"} err="failed to get container status \"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\": rpc error: code = NotFound desc = could not find container \"51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19\": container with ID starting with 51fc7b4f5ace1c1b61bfc74c8451ca0cb42dc56dba8fb3c18b514c1e76ad1f19 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.093551 5000 scope.go:117] "RemoveContainer" containerID="7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.093718 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367"} err="failed to get container status \"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\": rpc error: code = NotFound desc = could not find container \"7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367\": container with ID starting with 7da9321d37816bcb29f4af1cff91da740ef035bf4783533fec0d27eb287bf367 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.093735 5000 scope.go:117] "RemoveContainer" containerID="45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.093925 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059"} err="failed to get container status \"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\": rpc error: code = NotFound desc = could 
not find container \"45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059\": container with ID starting with 45a505942f9b6d881b76fa951c14b0fb1022c14ab19cb7aafe8a27f680895059 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.093939 5000 scope.go:117] "RemoveContainer" containerID="58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.094107 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29"} err="failed to get container status \"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\": rpc error: code = NotFound desc = could not find container \"58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29\": container with ID starting with 58f44bc5b0329e5b4eb3ebfc73c9a9a9ddaae03931ebfc05c9506d0dfacd2c29 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.094121 5000 scope.go:117] "RemoveContainer" containerID="fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.094267 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915"} err="failed to get container status \"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915\": rpc error: code = NotFound desc = could not find container \"fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915\": container with ID starting with fa0a9e5b84ef4243d95cc879e843c7be41c1d74a0d472c177b10d3d524cc4915 not found: ID does not exist" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123609 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-slash\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123647 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-node-log\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123664 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/646fb284-322e-4c84-819f-b4bc9ba3c6c0-ovnkube-config\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123679 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-systemd-units\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123695 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/646fb284-322e-4c84-819f-b4bc9ba3c6c0-ovnkube-script-lib\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123712 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-var-lib-openvswitch\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123733 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-log-socket\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123748 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/646fb284-322e-4c84-819f-b4bc9ba3c6c0-ovn-node-metrics-cert\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123765 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgch6\" (UniqueName: \"kubernetes.io/projected/646fb284-322e-4c84-819f-b4bc9ba3c6c0-kube-api-access-cgch6\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123782 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123785 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-systemd-units\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123802 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-run-netns\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123800 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-slash\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123864 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-etc-openvswitch\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123863 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-node-log\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123907 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-run-netns\") pod \"ovnkube-node-zm2n7\" (UID: 
\"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123819 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-etc-openvswitch\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123929 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-log-socket\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.123955 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124045 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-cni-bin\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124070 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/646fb284-322e-4c84-819f-b4bc9ba3c6c0-env-overrides\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124104 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-kubelet\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124122 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-cni-netd\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124210 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-run-ovn-kubernetes\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124268 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-run-systemd\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124288 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-run-ovn\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc 
kubenswrapper[5000]: I0105 21:44:00.124313 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-run-openvswitch\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124370 5000 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-node-log\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124380 5000 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124391 5000 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124402 5000 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a1406b03-70e6-4874-8cfe-5991e43cc720-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124411 5000 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124419 5000 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc 
kubenswrapper[5000]: I0105 21:44:00.124428 5000 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124436 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2h8f\" (UniqueName: \"kubernetes.io/projected/a1406b03-70e6-4874-8cfe-5991e43cc720-kube-api-access-x2h8f\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124445 5000 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-slash\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124454 5000 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a1406b03-70e6-4874-8cfe-5991e43cc720-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124464 5000 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124473 5000 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1406b03-70e6-4874-8cfe-5991e43cc720-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124500 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-run-openvswitch\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: 
I0105 21:44:00.124523 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-kubelet\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124545 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-cni-netd\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124564 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-run-ovn-kubernetes\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124586 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-run-systemd\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124606 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-run-ovn\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124091 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-var-lib-openvswitch\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124626 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/646fb284-322e-4c84-819f-b4bc9ba3c6c0-host-cni-bin\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124736 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/646fb284-322e-4c84-819f-b4bc9ba3c6c0-env-overrides\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.124911 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/646fb284-322e-4c84-819f-b4bc9ba3c6c0-ovnkube-config\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.125029 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/646fb284-322e-4c84-819f-b4bc9ba3c6c0-ovnkube-script-lib\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.127772 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/646fb284-322e-4c84-819f-b4bc9ba3c6c0-ovn-node-metrics-cert\") pod 
\"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.138950 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgch6\" (UniqueName: \"kubernetes.io/projected/646fb284-322e-4c84-819f-b4bc9ba3c6c0-kube-api-access-cgch6\") pod \"ovnkube-node-zm2n7\" (UID: \"646fb284-322e-4c84-819f-b4bc9ba3c6c0\") " pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.250629 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.277803 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f5k4c"] Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.285482 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f5k4c"] Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.346420 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-pgdwz" Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.955914 5000 generic.go:334] "Generic (PLEG): container finished" podID="646fb284-322e-4c84-819f-b4bc9ba3c6c0" containerID="3124c82e6746d5443ce30db32f85ab897e26df4fafe21d71c9e5f36112413964" exitCode=0 Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.955995 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" event={"ID":"646fb284-322e-4c84-819f-b4bc9ba3c6c0","Type":"ContainerDied","Data":"3124c82e6746d5443ce30db32f85ab897e26df4fafe21d71c9e5f36112413964"} Jan 05 21:44:00 crc kubenswrapper[5000]: I0105 21:44:00.956234 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" 
event={"ID":"646fb284-322e-4c84-819f-b4bc9ba3c6c0","Type":"ContainerStarted","Data":"fd4e93851112383bb5d2679ecfb921f0019503245f876cad60bb55c2c0c13f29"} Jan 05 21:44:01 crc kubenswrapper[5000]: I0105 21:44:01.330494 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1406b03-70e6-4874-8cfe-5991e43cc720" path="/var/lib/kubelet/pods/a1406b03-70e6-4874-8cfe-5991e43cc720/volumes" Jan 05 21:44:01 crc kubenswrapper[5000]: I0105 21:44:01.967696 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" event={"ID":"646fb284-322e-4c84-819f-b4bc9ba3c6c0","Type":"ContainerStarted","Data":"4097c56607f9696cbcdc378beb2e9a658b9a727d0bd520c62e3fd34dfc7d622c"} Jan 05 21:44:01 crc kubenswrapper[5000]: I0105 21:44:01.968117 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" event={"ID":"646fb284-322e-4c84-819f-b4bc9ba3c6c0","Type":"ContainerStarted","Data":"9ed4c614e31753ef56ccf27a88a3ff38f3798446a0b9169fa83396915ec7d5e0"} Jan 05 21:44:01 crc kubenswrapper[5000]: I0105 21:44:01.968132 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" event={"ID":"646fb284-322e-4c84-819f-b4bc9ba3c6c0","Type":"ContainerStarted","Data":"f07d81dac70aa1e97c1a387672a51764ac57e30143c4cd1fcf7bb4bc29567599"} Jan 05 21:44:01 crc kubenswrapper[5000]: I0105 21:44:01.968148 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" event={"ID":"646fb284-322e-4c84-819f-b4bc9ba3c6c0","Type":"ContainerStarted","Data":"5eb47965051b534a5c622605be8893dbcb0f6b4a98f888f84a459603372c366d"} Jan 05 21:44:01 crc kubenswrapper[5000]: I0105 21:44:01.968158 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" 
event={"ID":"646fb284-322e-4c84-819f-b4bc9ba3c6c0","Type":"ContainerStarted","Data":"b84fa19759699ed967bd5488cfef613bd40a080f8f178e95ccbc7c8f66f347e6"} Jan 05 21:44:01 crc kubenswrapper[5000]: I0105 21:44:01.968169 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" event={"ID":"646fb284-322e-4c84-819f-b4bc9ba3c6c0","Type":"ContainerStarted","Data":"b769b5300209f024846512a07de71e035f2112afb8cdc9e06556c61f017adfee"} Jan 05 21:44:03 crc kubenswrapper[5000]: I0105 21:44:03.980623 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" event={"ID":"646fb284-322e-4c84-819f-b4bc9ba3c6c0","Type":"ContainerStarted","Data":"ef4b9f1c24c304a61ed6a5aacaa66273d2daa82f94787c7b5b3a7f08caadf077"} Jan 05 21:44:05 crc kubenswrapper[5000]: I0105 21:44:05.544929 5000 scope.go:117] "RemoveContainer" containerID="32a3ac5e5f62e943213dbd1b8db1a8c5612797f483ce80d623ba3f27142119cf" Jan 05 21:44:05 crc kubenswrapper[5000]: I0105 21:44:05.562541 5000 scope.go:117] "RemoveContainer" containerID="9b15de792bbc6eefd87d8aa91f69b1002f2c3c602860e8ba8bcf6eef2c889bb7" Jan 05 21:44:05 crc kubenswrapper[5000]: I0105 21:44:05.581029 5000 scope.go:117] "RemoveContainer" containerID="d9046be61fa273923c77fe35be04fbf84a891ee4c803f73f42de122fa83f8ba0" Jan 05 21:44:05 crc kubenswrapper[5000]: I0105 21:44:05.992605 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sd8pl_c10b7118-eb24-495a-bb8f-bc46a3c38799/kube-multus/2.log" Jan 05 21:44:05 crc kubenswrapper[5000]: I0105 21:44:05.997080 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" event={"ID":"646fb284-322e-4c84-819f-b4bc9ba3c6c0","Type":"ContainerStarted","Data":"0e9aed97e8912ebbffc1671969ba169a5fd503df97e75b1370fa442ae1ed8424"} Jan 05 21:44:05 crc kubenswrapper[5000]: I0105 21:44:05.997292 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:05 crc kubenswrapper[5000]: I0105 21:44:05.997329 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:05 crc kubenswrapper[5000]: I0105 21:44:05.997341 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:06 crc kubenswrapper[5000]: I0105 21:44:06.022584 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:06 crc kubenswrapper[5000]: I0105 21:44:06.022833 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:06 crc kubenswrapper[5000]: I0105 21:44:06.040311 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" podStartSLOduration=7.040291565 podStartE2EDuration="7.040291565s" podCreationTimestamp="2026-01-05 21:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:44:06.02437747 +0000 UTC m=+600.980579949" watchObservedRunningTime="2026-01-05 21:44:06.040291565 +0000 UTC m=+600.996494034" Jan 05 21:44:13 crc kubenswrapper[5000]: I0105 21:44:13.323932 5000 scope.go:117] "RemoveContainer" containerID="56e710d4bb2d817674bc8f198e27521b38e972da7d83bffffca3188109845c6f" Jan 05 21:44:13 crc kubenswrapper[5000]: E0105 21:44:13.324676 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-sd8pl_openshift-multus(c10b7118-eb24-495a-bb8f-bc46a3c38799)\"" pod="openshift-multus/multus-sd8pl" podUID="c10b7118-eb24-495a-bb8f-bc46a3c38799" Jan 05 21:44:24 crc kubenswrapper[5000]: I0105 
21:44:24.323479 5000 scope.go:117] "RemoveContainer" containerID="56e710d4bb2d817674bc8f198e27521b38e972da7d83bffffca3188109845c6f" Jan 05 21:44:26 crc kubenswrapper[5000]: I0105 21:44:26.109240 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sd8pl_c10b7118-eb24-495a-bb8f-bc46a3c38799/kube-multus/2.log" Jan 05 21:44:26 crc kubenswrapper[5000]: I0105 21:44:26.109653 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sd8pl" event={"ID":"c10b7118-eb24-495a-bb8f-bc46a3c38799","Type":"ContainerStarted","Data":"7ec8b742df4d9ecfc353021fe5b9f159b74c71295b688f57b9ebb8be00ea1363"} Jan 05 21:44:30 crc kubenswrapper[5000]: I0105 21:44:30.275771 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zm2n7" Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.064481 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j"] Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.067177 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.069643 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.073374 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j"] Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.211307 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc49396f-e546-49a1-afc3-79b06accebaa-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j\" (UID: \"dc49396f-e546-49a1-afc3-79b06accebaa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.211373 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zflds\" (UniqueName: \"kubernetes.io/projected/dc49396f-e546-49a1-afc3-79b06accebaa-kube-api-access-zflds\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j\" (UID: \"dc49396f-e546-49a1-afc3-79b06accebaa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.211443 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc49396f-e546-49a1-afc3-79b06accebaa-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j\" (UID: \"dc49396f-e546-49a1-afc3-79b06accebaa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" Jan 05 21:44:40 crc kubenswrapper[5000]: 
I0105 21:44:40.312936 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc49396f-e546-49a1-afc3-79b06accebaa-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j\" (UID: \"dc49396f-e546-49a1-afc3-79b06accebaa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.313216 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zflds\" (UniqueName: \"kubernetes.io/projected/dc49396f-e546-49a1-afc3-79b06accebaa-kube-api-access-zflds\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j\" (UID: \"dc49396f-e546-49a1-afc3-79b06accebaa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.313417 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc49396f-e546-49a1-afc3-79b06accebaa-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j\" (UID: \"dc49396f-e546-49a1-afc3-79b06accebaa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.313515 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc49396f-e546-49a1-afc3-79b06accebaa-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j\" (UID: \"dc49396f-e546-49a1-afc3-79b06accebaa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.313760 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/dc49396f-e546-49a1-afc3-79b06accebaa-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j\" (UID: \"dc49396f-e546-49a1-afc3-79b06accebaa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.333541 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zflds\" (UniqueName: \"kubernetes.io/projected/dc49396f-e546-49a1-afc3-79b06accebaa-kube-api-access-zflds\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j\" (UID: \"dc49396f-e546-49a1-afc3-79b06accebaa\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.387524 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" Jan 05 21:44:40 crc kubenswrapper[5000]: I0105 21:44:40.769231 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j"] Jan 05 21:44:41 crc kubenswrapper[5000]: I0105 21:44:41.197413 5000 generic.go:334] "Generic (PLEG): container finished" podID="dc49396f-e546-49a1-afc3-79b06accebaa" containerID="26654c34bc32619d6371c3ba8d46ea24a71a7bc9835fcd3096ba1e4d5d9e19c8" exitCode=0 Jan 05 21:44:41 crc kubenswrapper[5000]: I0105 21:44:41.197461 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" event={"ID":"dc49396f-e546-49a1-afc3-79b06accebaa","Type":"ContainerDied","Data":"26654c34bc32619d6371c3ba8d46ea24a71a7bc9835fcd3096ba1e4d5d9e19c8"} Jan 05 21:44:41 crc kubenswrapper[5000]: I0105 21:44:41.197490 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" event={"ID":"dc49396f-e546-49a1-afc3-79b06accebaa","Type":"ContainerStarted","Data":"38786099dde2bbb4d870bfff5208a20d978d9806c117f039418ec7f5aed01649"} Jan 05 21:44:43 crc kubenswrapper[5000]: I0105 21:44:43.209087 5000 generic.go:334] "Generic (PLEG): container finished" podID="dc49396f-e546-49a1-afc3-79b06accebaa" containerID="10b84a7804c8c5791445a78d319bc082fbb532f2a75ef0bd658e756351e06a58" exitCode=0 Jan 05 21:44:43 crc kubenswrapper[5000]: I0105 21:44:43.209124 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" event={"ID":"dc49396f-e546-49a1-afc3-79b06accebaa","Type":"ContainerDied","Data":"10b84a7804c8c5791445a78d319bc082fbb532f2a75ef0bd658e756351e06a58"} Jan 05 21:44:44 crc kubenswrapper[5000]: I0105 21:44:44.215992 5000 generic.go:334] "Generic (PLEG): container finished" podID="dc49396f-e546-49a1-afc3-79b06accebaa" containerID="8a577982caa91ccb1fa95ccc52e16fa30080a7be808e432f3e6368643e91e9b7" exitCode=0 Jan 05 21:44:44 crc kubenswrapper[5000]: I0105 21:44:44.216088 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" event={"ID":"dc49396f-e546-49a1-afc3-79b06accebaa","Type":"ContainerDied","Data":"8a577982caa91ccb1fa95ccc52e16fa30080a7be808e432f3e6368643e91e9b7"} Jan 05 21:44:45 crc kubenswrapper[5000]: I0105 21:44:45.435089 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" Jan 05 21:44:45 crc kubenswrapper[5000]: I0105 21:44:45.572401 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc49396f-e546-49a1-afc3-79b06accebaa-util\") pod \"dc49396f-e546-49a1-afc3-79b06accebaa\" (UID: \"dc49396f-e546-49a1-afc3-79b06accebaa\") " Jan 05 21:44:45 crc kubenswrapper[5000]: I0105 21:44:45.572461 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zflds\" (UniqueName: \"kubernetes.io/projected/dc49396f-e546-49a1-afc3-79b06accebaa-kube-api-access-zflds\") pod \"dc49396f-e546-49a1-afc3-79b06accebaa\" (UID: \"dc49396f-e546-49a1-afc3-79b06accebaa\") " Jan 05 21:44:45 crc kubenswrapper[5000]: I0105 21:44:45.572538 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc49396f-e546-49a1-afc3-79b06accebaa-bundle\") pod \"dc49396f-e546-49a1-afc3-79b06accebaa\" (UID: \"dc49396f-e546-49a1-afc3-79b06accebaa\") " Jan 05 21:44:45 crc kubenswrapper[5000]: I0105 21:44:45.573227 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc49396f-e546-49a1-afc3-79b06accebaa-bundle" (OuterVolumeSpecName: "bundle") pod "dc49396f-e546-49a1-afc3-79b06accebaa" (UID: "dc49396f-e546-49a1-afc3-79b06accebaa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:44:45 crc kubenswrapper[5000]: I0105 21:44:45.581281 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc49396f-e546-49a1-afc3-79b06accebaa-kube-api-access-zflds" (OuterVolumeSpecName: "kube-api-access-zflds") pod "dc49396f-e546-49a1-afc3-79b06accebaa" (UID: "dc49396f-e546-49a1-afc3-79b06accebaa"). InnerVolumeSpecName "kube-api-access-zflds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:44:45 crc kubenswrapper[5000]: I0105 21:44:45.586787 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc49396f-e546-49a1-afc3-79b06accebaa-util" (OuterVolumeSpecName: "util") pod "dc49396f-e546-49a1-afc3-79b06accebaa" (UID: "dc49396f-e546-49a1-afc3-79b06accebaa"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:44:45 crc kubenswrapper[5000]: I0105 21:44:45.673848 5000 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc49396f-e546-49a1-afc3-79b06accebaa-util\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:45 crc kubenswrapper[5000]: I0105 21:44:45.673877 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zflds\" (UniqueName: \"kubernetes.io/projected/dc49396f-e546-49a1-afc3-79b06accebaa-kube-api-access-zflds\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:45 crc kubenswrapper[5000]: I0105 21:44:45.673934 5000 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc49396f-e546-49a1-afc3-79b06accebaa-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:44:46 crc kubenswrapper[5000]: I0105 21:44:46.230496 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" event={"ID":"dc49396f-e546-49a1-afc3-79b06accebaa","Type":"ContainerDied","Data":"38786099dde2bbb4d870bfff5208a20d978d9806c117f039418ec7f5aed01649"} Jan 05 21:44:46 crc kubenswrapper[5000]: I0105 21:44:46.230552 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38786099dde2bbb4d870bfff5208a20d978d9806c117f039418ec7f5aed01649" Jan 05 21:44:46 crc kubenswrapper[5000]: I0105 21:44:46.230562 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j" Jan 05 21:44:48 crc kubenswrapper[5000]: I0105 21:44:48.907850 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-r56zg"] Jan 05 21:44:48 crc kubenswrapper[5000]: E0105 21:44:48.908345 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc49396f-e546-49a1-afc3-79b06accebaa" containerName="extract" Jan 05 21:44:48 crc kubenswrapper[5000]: I0105 21:44:48.908357 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc49396f-e546-49a1-afc3-79b06accebaa" containerName="extract" Jan 05 21:44:48 crc kubenswrapper[5000]: E0105 21:44:48.908372 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc49396f-e546-49a1-afc3-79b06accebaa" containerName="pull" Jan 05 21:44:48 crc kubenswrapper[5000]: I0105 21:44:48.908381 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc49396f-e546-49a1-afc3-79b06accebaa" containerName="pull" Jan 05 21:44:48 crc kubenswrapper[5000]: E0105 21:44:48.908392 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc49396f-e546-49a1-afc3-79b06accebaa" containerName="util" Jan 05 21:44:48 crc kubenswrapper[5000]: I0105 21:44:48.908399 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc49396f-e546-49a1-afc3-79b06accebaa" containerName="util" Jan 05 21:44:48 crc kubenswrapper[5000]: I0105 21:44:48.908507 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc49396f-e546-49a1-afc3-79b06accebaa" containerName="extract" Jan 05 21:44:48 crc kubenswrapper[5000]: I0105 21:44:48.908955 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-r56zg" Jan 05 21:44:48 crc kubenswrapper[5000]: I0105 21:44:48.910305 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rv8pd" Jan 05 21:44:48 crc kubenswrapper[5000]: I0105 21:44:48.910751 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 05 21:44:48 crc kubenswrapper[5000]: I0105 21:44:48.912442 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 05 21:44:48 crc kubenswrapper[5000]: I0105 21:44:48.922037 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-r56zg"] Jan 05 21:44:49 crc kubenswrapper[5000]: I0105 21:44:49.014431 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfgv8\" (UniqueName: \"kubernetes.io/projected/e2491ff3-21bb-4019-b297-1e6b0bdd9707-kube-api-access-xfgv8\") pod \"nmstate-operator-6769fb99d-r56zg\" (UID: \"e2491ff3-21bb-4019-b297-1e6b0bdd9707\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-r56zg" Jan 05 21:44:49 crc kubenswrapper[5000]: I0105 21:44:49.115121 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfgv8\" (UniqueName: \"kubernetes.io/projected/e2491ff3-21bb-4019-b297-1e6b0bdd9707-kube-api-access-xfgv8\") pod \"nmstate-operator-6769fb99d-r56zg\" (UID: \"e2491ff3-21bb-4019-b297-1e6b0bdd9707\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-r56zg" Jan 05 21:44:49 crc kubenswrapper[5000]: I0105 21:44:49.134136 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfgv8\" (UniqueName: \"kubernetes.io/projected/e2491ff3-21bb-4019-b297-1e6b0bdd9707-kube-api-access-xfgv8\") pod \"nmstate-operator-6769fb99d-r56zg\" (UID: 
\"e2491ff3-21bb-4019-b297-1e6b0bdd9707\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-r56zg" Jan 05 21:44:49 crc kubenswrapper[5000]: I0105 21:44:49.223727 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-r56zg" Jan 05 21:44:49 crc kubenswrapper[5000]: I0105 21:44:49.403793 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-r56zg"] Jan 05 21:44:50 crc kubenswrapper[5000]: I0105 21:44:50.249532 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-r56zg" event={"ID":"e2491ff3-21bb-4019-b297-1e6b0bdd9707","Type":"ContainerStarted","Data":"09cfecd3d857bf6f1343d48d08e0c53e753b83d7e53b0891510c66dfd4b1e985"} Jan 05 21:44:52 crc kubenswrapper[5000]: I0105 21:44:52.260050 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-r56zg" event={"ID":"e2491ff3-21bb-4019-b297-1e6b0bdd9707","Type":"ContainerStarted","Data":"061a41a6757e8044c206b027cfb5731be4d14b348d0910ae4b1c195c2b8a8971"} Jan 05 21:44:52 crc kubenswrapper[5000]: I0105 21:44:52.311612 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-r56zg" podStartSLOduration=2.339019609 podStartE2EDuration="4.311594104s" podCreationTimestamp="2026-01-05 21:44:48 +0000 UTC" firstStartedPulling="2026-01-05 21:44:49.413271256 +0000 UTC m=+644.369473735" lastFinishedPulling="2026-01-05 21:44:51.385845761 +0000 UTC m=+646.342048230" observedRunningTime="2026-01-05 21:44:52.305987784 +0000 UTC m=+647.262190263" watchObservedRunningTime="2026-01-05 21:44:52.311594104 +0000 UTC m=+647.267796593" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.139185 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-9zbc5"] Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.140683 
5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-9zbc5" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.145176 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-wbcr6" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.156170 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-9zbc5"] Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.162158 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq"] Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.163215 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.182834 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.183108 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.187342 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck"] Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.188212 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.191376 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.191486 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq"] Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.195617 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-sgg82"] Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.199123 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.217455 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck"] Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.263983 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc8dw\" (UniqueName: \"kubernetes.io/projected/e754b051-d59b-4f7b-9bd4-8ac140b5a8a3-kube-api-access-pc8dw\") pod \"nmstate-metrics-7f7f7578db-9zbc5\" (UID: \"e754b051-d59b-4f7b-9bd4-8ac140b5a8a3\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-9zbc5" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.322814 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84"] Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.324948 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.336244 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.336526 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.336776 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-548vz" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.344611 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84"] Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.365133 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/061efcae-2cef-41f7-bae8-69730db02cf2-nmstate-lock\") pod \"nmstate-handler-sgg82\" (UID: \"061efcae-2cef-41f7-bae8-69730db02cf2\") " pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.365187 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nmlg\" (UniqueName: \"kubernetes.io/projected/b634923c-9274-4da5-9d49-783d92f632e9-kube-api-access-9nmlg\") pod \"collect-profiles-29460825-pxzpq\" (UID: \"b634923c-9274-4da5-9d49-783d92f632e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.365226 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b634923c-9274-4da5-9d49-783d92f632e9-secret-volume\") pod \"collect-profiles-29460825-pxzpq\" (UID: \"b634923c-9274-4da5-9d49-783d92f632e9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.365265 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/061efcae-2cef-41f7-bae8-69730db02cf2-ovs-socket\") pod \"nmstate-handler-sgg82\" (UID: \"061efcae-2cef-41f7-bae8-69730db02cf2\") " pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.365325 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc8dw\" (UniqueName: \"kubernetes.io/projected/e754b051-d59b-4f7b-9bd4-8ac140b5a8a3-kube-api-access-pc8dw\") pod \"nmstate-metrics-7f7f7578db-9zbc5\" (UID: \"e754b051-d59b-4f7b-9bd4-8ac140b5a8a3\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-9zbc5" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.365353 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dg68\" (UniqueName: \"kubernetes.io/projected/061efcae-2cef-41f7-bae8-69730db02cf2-kube-api-access-4dg68\") pod \"nmstate-handler-sgg82\" (UID: \"061efcae-2cef-41f7-bae8-69730db02cf2\") " pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.365383 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/251a5c5e-01cb-474f-9271-1d8ec430e9ac-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-hf8ck\" (UID: \"251a5c5e-01cb-474f-9271-1d8ec430e9ac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.365435 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b634923c-9274-4da5-9d49-783d92f632e9-config-volume\") pod 
\"collect-profiles-29460825-pxzpq\" (UID: \"b634923c-9274-4da5-9d49-783d92f632e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.365469 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zddr\" (UniqueName: \"kubernetes.io/projected/251a5c5e-01cb-474f-9271-1d8ec430e9ac-kube-api-access-6zddr\") pod \"nmstate-webhook-f8fb84555-hf8ck\" (UID: \"251a5c5e-01cb-474f-9271-1d8ec430e9ac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.365507 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/061efcae-2cef-41f7-bae8-69730db02cf2-dbus-socket\") pod \"nmstate-handler-sgg82\" (UID: \"061efcae-2cef-41f7-bae8-69730db02cf2\") " pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.386038 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc8dw\" (UniqueName: \"kubernetes.io/projected/e754b051-d59b-4f7b-9bd4-8ac140b5a8a3-kube-api-access-pc8dw\") pod \"nmstate-metrics-7f7f7578db-9zbc5\" (UID: \"e754b051-d59b-4f7b-9bd4-8ac140b5a8a3\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-9zbc5" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.466489 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b634923c-9274-4da5-9d49-783d92f632e9-config-volume\") pod \"collect-profiles-29460825-pxzpq\" (UID: \"b634923c-9274-4da5-9d49-783d92f632e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.466543 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zddr\" 
(UniqueName: \"kubernetes.io/projected/251a5c5e-01cb-474f-9271-1d8ec430e9ac-kube-api-access-6zddr\") pod \"nmstate-webhook-f8fb84555-hf8ck\" (UID: \"251a5c5e-01cb-474f-9271-1d8ec430e9ac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.466578 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/061efcae-2cef-41f7-bae8-69730db02cf2-dbus-socket\") pod \"nmstate-handler-sgg82\" (UID: \"061efcae-2cef-41f7-bae8-69730db02cf2\") " pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.466601 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/061efcae-2cef-41f7-bae8-69730db02cf2-nmstate-lock\") pod \"nmstate-handler-sgg82\" (UID: \"061efcae-2cef-41f7-bae8-69730db02cf2\") " pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.466620 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nmlg\" (UniqueName: \"kubernetes.io/projected/b634923c-9274-4da5-9d49-783d92f632e9-kube-api-access-9nmlg\") pod \"collect-profiles-29460825-pxzpq\" (UID: \"b634923c-9274-4da5-9d49-783d92f632e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.466657 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b634923c-9274-4da5-9d49-783d92f632e9-secret-volume\") pod \"collect-profiles-29460825-pxzpq\" (UID: \"b634923c-9274-4da5-9d49-783d92f632e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.466689 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-j5vr9\" (UniqueName: \"kubernetes.io/projected/51c01670-2f5f-45e5-b50c-10034384df7b-kube-api-access-j5vr9\") pod \"nmstate-console-plugin-6ff7998486-mwb84\" (UID: \"51c01670-2f5f-45e5-b50c-10034384df7b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.466725 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/061efcae-2cef-41f7-bae8-69730db02cf2-ovs-socket\") pod \"nmstate-handler-sgg82\" (UID: \"061efcae-2cef-41f7-bae8-69730db02cf2\") " pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.466764 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dg68\" (UniqueName: \"kubernetes.io/projected/061efcae-2cef-41f7-bae8-69730db02cf2-kube-api-access-4dg68\") pod \"nmstate-handler-sgg82\" (UID: \"061efcae-2cef-41f7-bae8-69730db02cf2\") " pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.466835 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/51c01670-2f5f-45e5-b50c-10034384df7b-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-mwb84\" (UID: \"51c01670-2f5f-45e5-b50c-10034384df7b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.466860 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/51c01670-2f5f-45e5-b50c-10034384df7b-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-mwb84\" (UID: \"51c01670-2f5f-45e5-b50c-10034384df7b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" Jan 05 21:45:00 crc kubenswrapper[5000]: 
I0105 21:45:00.466899 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/251a5c5e-01cb-474f-9271-1d8ec430e9ac-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-hf8ck\" (UID: \"251a5c5e-01cb-474f-9271-1d8ec430e9ac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.467853 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/061efcae-2cef-41f7-bae8-69730db02cf2-nmstate-lock\") pod \"nmstate-handler-sgg82\" (UID: \"061efcae-2cef-41f7-bae8-69730db02cf2\") " pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.467868 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/061efcae-2cef-41f7-bae8-69730db02cf2-dbus-socket\") pod \"nmstate-handler-sgg82\" (UID: \"061efcae-2cef-41f7-bae8-69730db02cf2\") " pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.468179 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b634923c-9274-4da5-9d49-783d92f632e9-config-volume\") pod \"collect-profiles-29460825-pxzpq\" (UID: \"b634923c-9274-4da5-9d49-783d92f632e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.468283 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/061efcae-2cef-41f7-bae8-69730db02cf2-ovs-socket\") pod \"nmstate-handler-sgg82\" (UID: \"061efcae-2cef-41f7-bae8-69730db02cf2\") " pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.474131 5000 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/251a5c5e-01cb-474f-9271-1d8ec430e9ac-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-hf8ck\" (UID: \"251a5c5e-01cb-474f-9271-1d8ec430e9ac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.474654 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b634923c-9274-4da5-9d49-783d92f632e9-secret-volume\") pod \"collect-profiles-29460825-pxzpq\" (UID: \"b634923c-9274-4da5-9d49-783d92f632e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.486649 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dg68\" (UniqueName: \"kubernetes.io/projected/061efcae-2cef-41f7-bae8-69730db02cf2-kube-api-access-4dg68\") pod \"nmstate-handler-sgg82\" (UID: \"061efcae-2cef-41f7-bae8-69730db02cf2\") " pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.487113 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nmlg\" (UniqueName: \"kubernetes.io/projected/b634923c-9274-4da5-9d49-783d92f632e9-kube-api-access-9nmlg\") pod \"collect-profiles-29460825-pxzpq\" (UID: \"b634923c-9274-4da5-9d49-783d92f632e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.487591 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-9zbc5" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.496423 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zddr\" (UniqueName: \"kubernetes.io/projected/251a5c5e-01cb-474f-9271-1d8ec430e9ac-kube-api-access-6zddr\") pod \"nmstate-webhook-f8fb84555-hf8ck\" (UID: \"251a5c5e-01cb-474f-9271-1d8ec430e9ac\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.515143 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.531593 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.533596 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6fc575d5d9-2d4rp"] Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.534435 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.546249 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.551288 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fc575d5d9-2d4rp"] Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.569164 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/51c01670-2f5f-45e5-b50c-10034384df7b-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-mwb84\" (UID: \"51c01670-2f5f-45e5-b50c-10034384df7b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.569263 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5vr9\" (UniqueName: \"kubernetes.io/projected/51c01670-2f5f-45e5-b50c-10034384df7b-kube-api-access-j5vr9\") pod \"nmstate-console-plugin-6ff7998486-mwb84\" (UID: \"51c01670-2f5f-45e5-b50c-10034384df7b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.569299 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/51c01670-2f5f-45e5-b50c-10034384df7b-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-mwb84\" (UID: \"51c01670-2f5f-45e5-b50c-10034384df7b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.572646 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/51c01670-2f5f-45e5-b50c-10034384df7b-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-mwb84\" (UID: \"51c01670-2f5f-45e5-b50c-10034384df7b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.587123 5000 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/51c01670-2f5f-45e5-b50c-10034384df7b-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-mwb84\" (UID: \"51c01670-2f5f-45e5-b50c-10034384df7b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.593669 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5vr9\" (UniqueName: \"kubernetes.io/projected/51c01670-2f5f-45e5-b50c-10034384df7b-kube-api-access-j5vr9\") pod \"nmstate-console-plugin-6ff7998486-mwb84\" (UID: \"51c01670-2f5f-45e5-b50c-10034384df7b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.657077 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.670575 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04de1794-3075-4bfb-955e-eeaa2f18d6c9-service-ca\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.670618 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld8tq\" (UniqueName: \"kubernetes.io/projected/04de1794-3075-4bfb-955e-eeaa2f18d6c9-kube-api-access-ld8tq\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.670649 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/04de1794-3075-4bfb-955e-eeaa2f18d6c9-console-serving-cert\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.670676 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04de1794-3075-4bfb-955e-eeaa2f18d6c9-console-oauth-config\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.670750 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04de1794-3075-4bfb-955e-eeaa2f18d6c9-console-config\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.670780 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04de1794-3075-4bfb-955e-eeaa2f18d6c9-trusted-ca-bundle\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.670818 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04de1794-3075-4bfb-955e-eeaa2f18d6c9-oauth-serving-cert\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.773000 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04de1794-3075-4bfb-955e-eeaa2f18d6c9-console-config\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.773032 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04de1794-3075-4bfb-955e-eeaa2f18d6c9-trusted-ca-bundle\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.773063 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04de1794-3075-4bfb-955e-eeaa2f18d6c9-oauth-serving-cert\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.773088 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04de1794-3075-4bfb-955e-eeaa2f18d6c9-service-ca\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.773110 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld8tq\" (UniqueName: \"kubernetes.io/projected/04de1794-3075-4bfb-955e-eeaa2f18d6c9-kube-api-access-ld8tq\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.773134 5000 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04de1794-3075-4bfb-955e-eeaa2f18d6c9-console-serving-cert\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.773160 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04de1794-3075-4bfb-955e-eeaa2f18d6c9-console-oauth-config\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.774520 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04de1794-3075-4bfb-955e-eeaa2f18d6c9-trusted-ca-bundle\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.774621 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04de1794-3075-4bfb-955e-eeaa2f18d6c9-oauth-serving-cert\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.775657 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04de1794-3075-4bfb-955e-eeaa2f18d6c9-service-ca\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.776301 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/04de1794-3075-4bfb-955e-eeaa2f18d6c9-console-config\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.778974 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04de1794-3075-4bfb-955e-eeaa2f18d6c9-console-serving-cert\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.779394 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04de1794-3075-4bfb-955e-eeaa2f18d6c9-console-oauth-config\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.790590 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld8tq\" (UniqueName: \"kubernetes.io/projected/04de1794-3075-4bfb-955e-eeaa2f18d6c9-kube-api-access-ld8tq\") pod \"console-6fc575d5d9-2d4rp\" (UID: \"04de1794-3075-4bfb-955e-eeaa2f18d6c9\") " pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.792709 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq"] Jan 05 21:45:00 crc kubenswrapper[5000]: W0105 21:45:00.798177 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb634923c_9274_4da5_9d49_783d92f632e9.slice/crio-6e8f77fcfe9b35436adf342b5ee84ea8ea2986117346ca1f4c45fd6b6218f069 WatchSource:0}: Error finding container 6e8f77fcfe9b35436adf342b5ee84ea8ea2986117346ca1f4c45fd6b6218f069: 
Status 404 returned error can't find the container with id 6e8f77fcfe9b35436adf342b5ee84ea8ea2986117346ca1f4c45fd6b6218f069 Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.822819 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck"] Jan 05 21:45:00 crc kubenswrapper[5000]: W0105 21:45:00.831708 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod251a5c5e_01cb_474f_9271_1d8ec430e9ac.slice/crio-3f2218d71931f18a835dc9ef3ca5000e6b6a645663ba59fc661e1558ff143110 WatchSource:0}: Error finding container 3f2218d71931f18a835dc9ef3ca5000e6b6a645663ba59fc661e1558ff143110: Status 404 returned error can't find the container with id 3f2218d71931f18a835dc9ef3ca5000e6b6a645663ba59fc661e1558ff143110 Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.870400 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.870458 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84"] Jan 05 21:45:00 crc kubenswrapper[5000]: I0105 21:45:00.945181 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-9zbc5"] Jan 05 21:45:00 crc kubenswrapper[5000]: W0105 21:45:00.952818 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode754b051_d59b_4f7b_9bd4_8ac140b5a8a3.slice/crio-aa970de1757c4a21623e4638f71a4f697884669d24f21d79095974aa8917a51d WatchSource:0}: Error finding container aa970de1757c4a21623e4638f71a4f697884669d24f21d79095974aa8917a51d: Status 404 returned error can't find the container with id aa970de1757c4a21623e4638f71a4f697884669d24f21d79095974aa8917a51d Jan 05 21:45:01 crc kubenswrapper[5000]: I0105 21:45:01.053084 5000 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fc575d5d9-2d4rp"] Jan 05 21:45:01 crc kubenswrapper[5000]: W0105 21:45:01.060949 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04de1794_3075_4bfb_955e_eeaa2f18d6c9.slice/crio-8b668dfc0a41d40c66de4eb6aeb26fb52d6a4714afaa8170c8565a08ad80e7c4 WatchSource:0}: Error finding container 8b668dfc0a41d40c66de4eb6aeb26fb52d6a4714afaa8170c8565a08ad80e7c4: Status 404 returned error can't find the container with id 8b668dfc0a41d40c66de4eb6aeb26fb52d6a4714afaa8170c8565a08ad80e7c4 Jan 05 21:45:01 crc kubenswrapper[5000]: I0105 21:45:01.315726 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sgg82" event={"ID":"061efcae-2cef-41f7-bae8-69730db02cf2","Type":"ContainerStarted","Data":"814ee3abf761c9fb63b893bfd441cb9791f5cfe2636b73857d7ce6124f95a988"} Jan 05 21:45:01 crc kubenswrapper[5000]: I0105 21:45:01.316639 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" event={"ID":"51c01670-2f5f-45e5-b50c-10034384df7b","Type":"ContainerStarted","Data":"cb470966488624f8d4d61007b7220b3844e9a0801b726e1f3a21d96251370207"} Jan 05 21:45:01 crc kubenswrapper[5000]: I0105 21:45:01.317693 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fc575d5d9-2d4rp" event={"ID":"04de1794-3075-4bfb-955e-eeaa2f18d6c9","Type":"ContainerStarted","Data":"b39a2e5ab5bfeebc90b589d273807ef56521141550cca1bc537673407e379389"} Jan 05 21:45:01 crc kubenswrapper[5000]: I0105 21:45:01.317720 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fc575d5d9-2d4rp" event={"ID":"04de1794-3075-4bfb-955e-eeaa2f18d6c9","Type":"ContainerStarted","Data":"8b668dfc0a41d40c66de4eb6aeb26fb52d6a4714afaa8170c8565a08ad80e7c4"} Jan 05 21:45:01 crc kubenswrapper[5000]: I0105 21:45:01.320133 5000 
generic.go:334] "Generic (PLEG): container finished" podID="b634923c-9274-4da5-9d49-783d92f632e9" containerID="ac6338554e484ded93f7e8c247f935069a303a184ed78beee6d9d7d431a52eae" exitCode=0 Jan 05 21:45:01 crc kubenswrapper[5000]: I0105 21:45:01.320209 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" event={"ID":"b634923c-9274-4da5-9d49-783d92f632e9","Type":"ContainerDied","Data":"ac6338554e484ded93f7e8c247f935069a303a184ed78beee6d9d7d431a52eae"} Jan 05 21:45:01 crc kubenswrapper[5000]: I0105 21:45:01.320235 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" event={"ID":"b634923c-9274-4da5-9d49-783d92f632e9","Type":"ContainerStarted","Data":"6e8f77fcfe9b35436adf342b5ee84ea8ea2986117346ca1f4c45fd6b6218f069"} Jan 05 21:45:01 crc kubenswrapper[5000]: I0105 21:45:01.321910 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-9zbc5" event={"ID":"e754b051-d59b-4f7b-9bd4-8ac140b5a8a3","Type":"ContainerStarted","Data":"aa970de1757c4a21623e4638f71a4f697884669d24f21d79095974aa8917a51d"} Jan 05 21:45:01 crc kubenswrapper[5000]: I0105 21:45:01.328978 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck" event={"ID":"251a5c5e-01cb-474f-9271-1d8ec430e9ac","Type":"ContainerStarted","Data":"3f2218d71931f18a835dc9ef3ca5000e6b6a645663ba59fc661e1558ff143110"} Jan 05 21:45:01 crc kubenswrapper[5000]: I0105 21:45:01.333453 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6fc575d5d9-2d4rp" podStartSLOduration=1.333439565 podStartE2EDuration="1.333439565s" podCreationTimestamp="2026-01-05 21:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:45:01.332879429 +0000 UTC 
m=+656.289081898" watchObservedRunningTime="2026-01-05 21:45:01.333439565 +0000 UTC m=+656.289642034" Jan 05 21:45:02 crc kubenswrapper[5000]: I0105 21:45:02.575470 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" Jan 05 21:45:02 crc kubenswrapper[5000]: I0105 21:45:02.699466 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b634923c-9274-4da5-9d49-783d92f632e9-secret-volume\") pod \"b634923c-9274-4da5-9d49-783d92f632e9\" (UID: \"b634923c-9274-4da5-9d49-783d92f632e9\") " Jan 05 21:45:02 crc kubenswrapper[5000]: I0105 21:45:02.699610 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nmlg\" (UniqueName: \"kubernetes.io/projected/b634923c-9274-4da5-9d49-783d92f632e9-kube-api-access-9nmlg\") pod \"b634923c-9274-4da5-9d49-783d92f632e9\" (UID: \"b634923c-9274-4da5-9d49-783d92f632e9\") " Jan 05 21:45:02 crc kubenswrapper[5000]: I0105 21:45:02.699656 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b634923c-9274-4da5-9d49-783d92f632e9-config-volume\") pod \"b634923c-9274-4da5-9d49-783d92f632e9\" (UID: \"b634923c-9274-4da5-9d49-783d92f632e9\") " Jan 05 21:45:02 crc kubenswrapper[5000]: I0105 21:45:02.700455 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b634923c-9274-4da5-9d49-783d92f632e9-config-volume" (OuterVolumeSpecName: "config-volume") pod "b634923c-9274-4da5-9d49-783d92f632e9" (UID: "b634923c-9274-4da5-9d49-783d92f632e9"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:45:02 crc kubenswrapper[5000]: I0105 21:45:02.704826 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b634923c-9274-4da5-9d49-783d92f632e9-kube-api-access-9nmlg" (OuterVolumeSpecName: "kube-api-access-9nmlg") pod "b634923c-9274-4da5-9d49-783d92f632e9" (UID: "b634923c-9274-4da5-9d49-783d92f632e9"). InnerVolumeSpecName "kube-api-access-9nmlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:45:02 crc kubenswrapper[5000]: I0105 21:45:02.716084 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b634923c-9274-4da5-9d49-783d92f632e9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b634923c-9274-4da5-9d49-783d92f632e9" (UID: "b634923c-9274-4da5-9d49-783d92f632e9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:45:02 crc kubenswrapper[5000]: I0105 21:45:02.800825 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nmlg\" (UniqueName: \"kubernetes.io/projected/b634923c-9274-4da5-9d49-783d92f632e9-kube-api-access-9nmlg\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:02 crc kubenswrapper[5000]: I0105 21:45:02.800867 5000 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b634923c-9274-4da5-9d49-783d92f632e9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:02 crc kubenswrapper[5000]: I0105 21:45:02.800879 5000 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b634923c-9274-4da5-9d49-783d92f632e9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:03 crc kubenswrapper[5000]: I0105 21:45:03.341338 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" 
event={"ID":"b634923c-9274-4da5-9d49-783d92f632e9","Type":"ContainerDied","Data":"6e8f77fcfe9b35436adf342b5ee84ea8ea2986117346ca1f4c45fd6b6218f069"} Jan 05 21:45:03 crc kubenswrapper[5000]: I0105 21:45:03.341600 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e8f77fcfe9b35436adf342b5ee84ea8ea2986117346ca1f4c45fd6b6218f069" Jan 05 21:45:03 crc kubenswrapper[5000]: I0105 21:45:03.341375 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq" Jan 05 21:45:04 crc kubenswrapper[5000]: I0105 21:45:04.348916 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" event={"ID":"51c01670-2f5f-45e5-b50c-10034384df7b","Type":"ContainerStarted","Data":"e0e32e39d018a7746a3cd676fe4a5fe3a4c7d616890e70b3b76e0fce0b4466fd"} Jan 05 21:45:04 crc kubenswrapper[5000]: I0105 21:45:04.351418 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-9zbc5" event={"ID":"e754b051-d59b-4f7b-9bd4-8ac140b5a8a3","Type":"ContainerStarted","Data":"7fbfe74656b0c48675aab318eb647114e4890c2cb0915ec7d302d675bb981bec"} Jan 05 21:45:04 crc kubenswrapper[5000]: I0105 21:45:04.352752 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck" event={"ID":"251a5c5e-01cb-474f-9271-1d8ec430e9ac","Type":"ContainerStarted","Data":"2a85648e4416d79059806cf9db0c348749a232ce08ef873edda1a996b0b97bc8"} Jan 05 21:45:04 crc kubenswrapper[5000]: I0105 21:45:04.352924 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck" Jan 05 21:45:04 crc kubenswrapper[5000]: I0105 21:45:04.355234 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sgg82" 
event={"ID":"061efcae-2cef-41f7-bae8-69730db02cf2","Type":"ContainerStarted","Data":"02490c9b7d0c622e5b48710934c221bb1edcee484e88e9fd33ad6e3372ee725f"} Jan 05 21:45:04 crc kubenswrapper[5000]: I0105 21:45:04.355374 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:04 crc kubenswrapper[5000]: I0105 21:45:04.369245 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-mwb84" podStartSLOduration=1.571876104 podStartE2EDuration="4.369218796s" podCreationTimestamp="2026-01-05 21:45:00 +0000 UTC" firstStartedPulling="2026-01-05 21:45:00.886577086 +0000 UTC m=+655.842779555" lastFinishedPulling="2026-01-05 21:45:03.683919778 +0000 UTC m=+658.640122247" observedRunningTime="2026-01-05 21:45:04.361348411 +0000 UTC m=+659.317550890" watchObservedRunningTime="2026-01-05 21:45:04.369218796 +0000 UTC m=+659.325421285" Jan 05 21:45:04 crc kubenswrapper[5000]: I0105 21:45:04.387345 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-sgg82" podStartSLOduration=1.30980032 podStartE2EDuration="4.387326213s" podCreationTimestamp="2026-01-05 21:45:00 +0000 UTC" firstStartedPulling="2026-01-05 21:45:00.606168859 +0000 UTC m=+655.562371328" lastFinishedPulling="2026-01-05 21:45:03.683694752 +0000 UTC m=+658.639897221" observedRunningTime="2026-01-05 21:45:04.378010967 +0000 UTC m=+659.334213436" watchObservedRunningTime="2026-01-05 21:45:04.387326213 +0000 UTC m=+659.343528682" Jan 05 21:45:05 crc kubenswrapper[5000]: I0105 21:45:05.347728 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck" podStartSLOduration=2.494257411 podStartE2EDuration="5.347707695s" podCreationTimestamp="2026-01-05 21:45:00 +0000 UTC" firstStartedPulling="2026-01-05 21:45:00.833372537 +0000 UTC m=+655.789575006" 
lastFinishedPulling="2026-01-05 21:45:03.686822821 +0000 UTC m=+658.643025290" observedRunningTime="2026-01-05 21:45:04.396714641 +0000 UTC m=+659.352917130" watchObservedRunningTime="2026-01-05 21:45:05.347707695 +0000 UTC m=+660.303910164" Jan 05 21:45:06 crc kubenswrapper[5000]: I0105 21:45:06.368546 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-9zbc5" event={"ID":"e754b051-d59b-4f7b-9bd4-8ac140b5a8a3","Type":"ContainerStarted","Data":"dd33eb1928b8d138b05b0f95587424587abf2f4cec3bb48d06cce72ee36f2fb2"} Jan 05 21:45:06 crc kubenswrapper[5000]: I0105 21:45:06.384436 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-9zbc5" podStartSLOduration=1.418958578 podStartE2EDuration="6.384416827s" podCreationTimestamp="2026-01-05 21:45:00 +0000 UTC" firstStartedPulling="2026-01-05 21:45:00.958276203 +0000 UTC m=+655.914478672" lastFinishedPulling="2026-01-05 21:45:05.923734452 +0000 UTC m=+660.879936921" observedRunningTime="2026-01-05 21:45:06.382501172 +0000 UTC m=+661.338703651" watchObservedRunningTime="2026-01-05 21:45:06.384416827 +0000 UTC m=+661.340619296" Jan 05 21:45:10 crc kubenswrapper[5000]: I0105 21:45:10.582635 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-sgg82" Jan 05 21:45:10 crc kubenswrapper[5000]: I0105 21:45:10.871790 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:10 crc kubenswrapper[5000]: I0105 21:45:10.871860 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:10 crc kubenswrapper[5000]: I0105 21:45:10.879086 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:11 crc kubenswrapper[5000]: I0105 
21:45:11.399953 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6fc575d5d9-2d4rp" Jan 05 21:45:11 crc kubenswrapper[5000]: I0105 21:45:11.472391 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7mvq2"] Jan 05 21:45:20 crc kubenswrapper[5000]: I0105 21:45:20.543398 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-hf8ck" Jan 05 21:45:32 crc kubenswrapper[5000]: I0105 21:45:32.861744 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl"] Jan 05 21:45:32 crc kubenswrapper[5000]: E0105 21:45:32.862589 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b634923c-9274-4da5-9d49-783d92f632e9" containerName="collect-profiles" Jan 05 21:45:32 crc kubenswrapper[5000]: I0105 21:45:32.862606 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="b634923c-9274-4da5-9d49-783d92f632e9" containerName="collect-profiles" Jan 05 21:45:32 crc kubenswrapper[5000]: I0105 21:45:32.862728 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="b634923c-9274-4da5-9d49-783d92f632e9" containerName="collect-profiles" Jan 05 21:45:32 crc kubenswrapper[5000]: I0105 21:45:32.863638 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" Jan 05 21:45:32 crc kubenswrapper[5000]: I0105 21:45:32.865511 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 05 21:45:32 crc kubenswrapper[5000]: I0105 21:45:32.871265 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl"] Jan 05 21:45:32 crc kubenswrapper[5000]: I0105 21:45:32.987900 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec09c357-2496-458f-8c66-3acb727c58bd-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl\" (UID: \"ec09c357-2496-458f-8c66-3acb727c58bd\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" Jan 05 21:45:32 crc kubenswrapper[5000]: I0105 21:45:32.987988 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7fkq\" (UniqueName: \"kubernetes.io/projected/ec09c357-2496-458f-8c66-3acb727c58bd-kube-api-access-z7fkq\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl\" (UID: \"ec09c357-2496-458f-8c66-3acb727c58bd\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" Jan 05 21:45:32 crc kubenswrapper[5000]: I0105 21:45:32.988216 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ec09c357-2496-458f-8c66-3acb727c58bd-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl\" (UID: \"ec09c357-2496-458f-8c66-3acb727c58bd\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" Jan 05 21:45:33 crc kubenswrapper[5000]: 
I0105 21:45:33.089737 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7fkq\" (UniqueName: \"kubernetes.io/projected/ec09c357-2496-458f-8c66-3acb727c58bd-kube-api-access-z7fkq\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl\" (UID: \"ec09c357-2496-458f-8c66-3acb727c58bd\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" Jan 05 21:45:33 crc kubenswrapper[5000]: I0105 21:45:33.089805 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ec09c357-2496-458f-8c66-3acb727c58bd-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl\" (UID: \"ec09c357-2496-458f-8c66-3acb727c58bd\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" Jan 05 21:45:33 crc kubenswrapper[5000]: I0105 21:45:33.089835 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec09c357-2496-458f-8c66-3acb727c58bd-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl\" (UID: \"ec09c357-2496-458f-8c66-3acb727c58bd\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" Jan 05 21:45:33 crc kubenswrapper[5000]: I0105 21:45:33.090290 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec09c357-2496-458f-8c66-3acb727c58bd-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl\" (UID: \"ec09c357-2496-458f-8c66-3acb727c58bd\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" Jan 05 21:45:33 crc kubenswrapper[5000]: I0105 21:45:33.090588 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ec09c357-2496-458f-8c66-3acb727c58bd-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl\" (UID: \"ec09c357-2496-458f-8c66-3acb727c58bd\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" Jan 05 21:45:33 crc kubenswrapper[5000]: I0105 21:45:33.107428 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7fkq\" (UniqueName: \"kubernetes.io/projected/ec09c357-2496-458f-8c66-3acb727c58bd-kube-api-access-z7fkq\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl\" (UID: \"ec09c357-2496-458f-8c66-3acb727c58bd\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" Jan 05 21:45:33 crc kubenswrapper[5000]: I0105 21:45:33.181141 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" Jan 05 21:45:33 crc kubenswrapper[5000]: I0105 21:45:33.619942 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl"] Jan 05 21:45:34 crc kubenswrapper[5000]: I0105 21:45:34.538818 5000 generic.go:334] "Generic (PLEG): container finished" podID="ec09c357-2496-458f-8c66-3acb727c58bd" containerID="93b08dd0493cea3bf29c729c4c20221fea9ffa8cafdaeff83454b1ea9eef24e9" exitCode=0 Jan 05 21:45:34 crc kubenswrapper[5000]: I0105 21:45:34.538855 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" event={"ID":"ec09c357-2496-458f-8c66-3acb727c58bd","Type":"ContainerDied","Data":"93b08dd0493cea3bf29c729c4c20221fea9ffa8cafdaeff83454b1ea9eef24e9"} Jan 05 21:45:34 crc kubenswrapper[5000]: I0105 21:45:34.538911 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" event={"ID":"ec09c357-2496-458f-8c66-3acb727c58bd","Type":"ContainerStarted","Data":"f2fa10038dcd0d91d66221fbe334fde61a2636873bc81918141edf62fbb73dda"} Jan 05 21:45:36 crc kubenswrapper[5000]: I0105 21:45:36.546350 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-7mvq2" podUID="71825513-a9cf-4528-962f-b0c05006bdcd" containerName="console" containerID="cri-o://f34e6b41f7c8b70fa4817b29972f46e5ff371cdb5d35b0a491ceef4bd91a8981" gracePeriod=15 Jan 05 21:45:36 crc kubenswrapper[5000]: I0105 21:45:36.564757 5000 generic.go:334] "Generic (PLEG): container finished" podID="ec09c357-2496-458f-8c66-3acb727c58bd" containerID="090bffb1bc04e03e33d72a85f78490b63f017d4244d9b717f910c495c61203ac" exitCode=0 Jan 05 21:45:36 crc kubenswrapper[5000]: I0105 21:45:36.564799 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" event={"ID":"ec09c357-2496-458f-8c66-3acb727c58bd","Type":"ContainerDied","Data":"090bffb1bc04e03e33d72a85f78490b63f017d4244d9b717f910c495c61203ac"} Jan 05 21:45:36 crc kubenswrapper[5000]: I0105 21:45:36.974982 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7mvq2_71825513-a9cf-4528-962f-b0c05006bdcd/console/0.log" Jan 05 21:45:36 crc kubenswrapper[5000]: I0105 21:45:36.975415 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.161735 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hkpb\" (UniqueName: \"kubernetes.io/projected/71825513-a9cf-4528-962f-b0c05006bdcd-kube-api-access-4hkpb\") pod \"71825513-a9cf-4528-962f-b0c05006bdcd\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.161871 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71825513-a9cf-4528-962f-b0c05006bdcd-console-oauth-config\") pod \"71825513-a9cf-4528-962f-b0c05006bdcd\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.161906 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71825513-a9cf-4528-962f-b0c05006bdcd-console-serving-cert\") pod \"71825513-a9cf-4528-962f-b0c05006bdcd\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.161929 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-oauth-serving-cert\") pod \"71825513-a9cf-4528-962f-b0c05006bdcd\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.161947 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-console-config\") pod \"71825513-a9cf-4528-962f-b0c05006bdcd\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.161984 5000 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-trusted-ca-bundle\") pod \"71825513-a9cf-4528-962f-b0c05006bdcd\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.162012 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-service-ca\") pod \"71825513-a9cf-4528-962f-b0c05006bdcd\" (UID: \"71825513-a9cf-4528-962f-b0c05006bdcd\") " Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.162803 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-service-ca" (OuterVolumeSpecName: "service-ca") pod "71825513-a9cf-4528-962f-b0c05006bdcd" (UID: "71825513-a9cf-4528-962f-b0c05006bdcd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.162827 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "71825513-a9cf-4528-962f-b0c05006bdcd" (UID: "71825513-a9cf-4528-962f-b0c05006bdcd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.162962 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-console-config" (OuterVolumeSpecName: "console-config") pod "71825513-a9cf-4528-962f-b0c05006bdcd" (UID: "71825513-a9cf-4528-962f-b0c05006bdcd"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.163069 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "71825513-a9cf-4528-962f-b0c05006bdcd" (UID: "71825513-a9cf-4528-962f-b0c05006bdcd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.168293 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71825513-a9cf-4528-962f-b0c05006bdcd-kube-api-access-4hkpb" (OuterVolumeSpecName: "kube-api-access-4hkpb") pod "71825513-a9cf-4528-962f-b0c05006bdcd" (UID: "71825513-a9cf-4528-962f-b0c05006bdcd"). InnerVolumeSpecName "kube-api-access-4hkpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.168838 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71825513-a9cf-4528-962f-b0c05006bdcd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "71825513-a9cf-4528-962f-b0c05006bdcd" (UID: "71825513-a9cf-4528-962f-b0c05006bdcd"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.170026 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71825513-a9cf-4528-962f-b0c05006bdcd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "71825513-a9cf-4528-962f-b0c05006bdcd" (UID: "71825513-a9cf-4528-962f-b0c05006bdcd"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.263393 5000 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71825513-a9cf-4528-962f-b0c05006bdcd-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.263648 5000 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71825513-a9cf-4528-962f-b0c05006bdcd-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.263707 5000 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.263804 5000 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-console-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.263928 5000 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.264015 5000 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71825513-a9cf-4528-962f-b0c05006bdcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.264087 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hkpb\" (UniqueName: \"kubernetes.io/projected/71825513-a9cf-4528-962f-b0c05006bdcd-kube-api-access-4hkpb\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:37 crc 
kubenswrapper[5000]: I0105 21:45:37.583003 5000 generic.go:334] "Generic (PLEG): container finished" podID="ec09c357-2496-458f-8c66-3acb727c58bd" containerID="f0ebfab4703ae95647b3a8d1bc33fe4578ab620cc4a90a55792fd31e98c6568d" exitCode=0 Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.583083 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" event={"ID":"ec09c357-2496-458f-8c66-3acb727c58bd","Type":"ContainerDied","Data":"f0ebfab4703ae95647b3a8d1bc33fe4578ab620cc4a90a55792fd31e98c6568d"} Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.587262 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7mvq2_71825513-a9cf-4528-962f-b0c05006bdcd/console/0.log" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.587340 5000 generic.go:334] "Generic (PLEG): container finished" podID="71825513-a9cf-4528-962f-b0c05006bdcd" containerID="f34e6b41f7c8b70fa4817b29972f46e5ff371cdb5d35b0a491ceef4bd91a8981" exitCode=2 Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.587382 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7mvq2" event={"ID":"71825513-a9cf-4528-962f-b0c05006bdcd","Type":"ContainerDied","Data":"f34e6b41f7c8b70fa4817b29972f46e5ff371cdb5d35b0a491ceef4bd91a8981"} Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.587413 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7mvq2" event={"ID":"71825513-a9cf-4528-962f-b0c05006bdcd","Type":"ContainerDied","Data":"868c2cfed8f1ad98c18fc1def29e08dbeec7fb8c51b99b7cc4317c6c9b2380f2"} Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.587443 5000 scope.go:117] "RemoveContainer" containerID="f34e6b41f7c8b70fa4817b29972f46e5ff371cdb5d35b0a491ceef4bd91a8981" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.587602 5000 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-7mvq2" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.613674 5000 scope.go:117] "RemoveContainer" containerID="f34e6b41f7c8b70fa4817b29972f46e5ff371cdb5d35b0a491ceef4bd91a8981" Jan 05 21:45:37 crc kubenswrapper[5000]: E0105 21:45:37.614613 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f34e6b41f7c8b70fa4817b29972f46e5ff371cdb5d35b0a491ceef4bd91a8981\": container with ID starting with f34e6b41f7c8b70fa4817b29972f46e5ff371cdb5d35b0a491ceef4bd91a8981 not found: ID does not exist" containerID="f34e6b41f7c8b70fa4817b29972f46e5ff371cdb5d35b0a491ceef4bd91a8981" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.614663 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f34e6b41f7c8b70fa4817b29972f46e5ff371cdb5d35b0a491ceef4bd91a8981"} err="failed to get container status \"f34e6b41f7c8b70fa4817b29972f46e5ff371cdb5d35b0a491ceef4bd91a8981\": rpc error: code = NotFound desc = could not find container \"f34e6b41f7c8b70fa4817b29972f46e5ff371cdb5d35b0a491ceef4bd91a8981\": container with ID starting with f34e6b41f7c8b70fa4817b29972f46e5ff371cdb5d35b0a491ceef4bd91a8981 not found: ID does not exist" Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.626610 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7mvq2"] Jan 05 21:45:37 crc kubenswrapper[5000]: I0105 21:45:37.634779 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-7mvq2"] Jan 05 21:45:38 crc kubenswrapper[5000]: I0105 21:45:38.833722 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" Jan 05 21:45:38 crc kubenswrapper[5000]: I0105 21:45:38.983330 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ec09c357-2496-458f-8c66-3acb727c58bd-bundle\") pod \"ec09c357-2496-458f-8c66-3acb727c58bd\" (UID: \"ec09c357-2496-458f-8c66-3acb727c58bd\") " Jan 05 21:45:38 crc kubenswrapper[5000]: I0105 21:45:38.983407 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7fkq\" (UniqueName: \"kubernetes.io/projected/ec09c357-2496-458f-8c66-3acb727c58bd-kube-api-access-z7fkq\") pod \"ec09c357-2496-458f-8c66-3acb727c58bd\" (UID: \"ec09c357-2496-458f-8c66-3acb727c58bd\") " Jan 05 21:45:38 crc kubenswrapper[5000]: I0105 21:45:38.983459 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec09c357-2496-458f-8c66-3acb727c58bd-util\") pod \"ec09c357-2496-458f-8c66-3acb727c58bd\" (UID: \"ec09c357-2496-458f-8c66-3acb727c58bd\") " Jan 05 21:45:38 crc kubenswrapper[5000]: I0105 21:45:38.984422 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec09c357-2496-458f-8c66-3acb727c58bd-bundle" (OuterVolumeSpecName: "bundle") pod "ec09c357-2496-458f-8c66-3acb727c58bd" (UID: "ec09c357-2496-458f-8c66-3acb727c58bd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:45:38 crc kubenswrapper[5000]: I0105 21:45:38.989051 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec09c357-2496-458f-8c66-3acb727c58bd-kube-api-access-z7fkq" (OuterVolumeSpecName: "kube-api-access-z7fkq") pod "ec09c357-2496-458f-8c66-3acb727c58bd" (UID: "ec09c357-2496-458f-8c66-3acb727c58bd"). InnerVolumeSpecName "kube-api-access-z7fkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:45:39 crc kubenswrapper[5000]: I0105 21:45:39.003173 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec09c357-2496-458f-8c66-3acb727c58bd-util" (OuterVolumeSpecName: "util") pod "ec09c357-2496-458f-8c66-3acb727c58bd" (UID: "ec09c357-2496-458f-8c66-3acb727c58bd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:45:39 crc kubenswrapper[5000]: I0105 21:45:39.085341 5000 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ec09c357-2496-458f-8c66-3acb727c58bd-util\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:39 crc kubenswrapper[5000]: I0105 21:45:39.085407 5000 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ec09c357-2496-458f-8c66-3acb727c58bd-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:39 crc kubenswrapper[5000]: I0105 21:45:39.085422 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7fkq\" (UniqueName: \"kubernetes.io/projected/ec09c357-2496-458f-8c66-3acb727c58bd-kube-api-access-z7fkq\") on node \"crc\" DevicePath \"\"" Jan 05 21:45:39 crc kubenswrapper[5000]: I0105 21:45:39.335373 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71825513-a9cf-4528-962f-b0c05006bdcd" path="/var/lib/kubelet/pods/71825513-a9cf-4528-962f-b0c05006bdcd/volumes" Jan 05 21:45:39 crc kubenswrapper[5000]: I0105 21:45:39.601800 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" event={"ID":"ec09c357-2496-458f-8c66-3acb727c58bd","Type":"ContainerDied","Data":"f2fa10038dcd0d91d66221fbe334fde61a2636873bc81918141edf62fbb73dda"} Jan 05 21:45:39 crc kubenswrapper[5000]: I0105 21:45:39.602101 5000 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f2fa10038dcd0d91d66221fbe334fde61a2636873bc81918141edf62fbb73dda" Jan 05 21:45:39 crc kubenswrapper[5000]: I0105 21:45:39.601909 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.200681 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw"] Jan 05 21:45:48 crc kubenswrapper[5000]: E0105 21:45:48.201511 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec09c357-2496-458f-8c66-3acb727c58bd" containerName="pull" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.201526 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec09c357-2496-458f-8c66-3acb727c58bd" containerName="pull" Jan 05 21:45:48 crc kubenswrapper[5000]: E0105 21:45:48.201554 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec09c357-2496-458f-8c66-3acb727c58bd" containerName="extract" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.201563 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec09c357-2496-458f-8c66-3acb727c58bd" containerName="extract" Jan 05 21:45:48 crc kubenswrapper[5000]: E0105 21:45:48.201576 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec09c357-2496-458f-8c66-3acb727c58bd" containerName="util" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.201584 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec09c357-2496-458f-8c66-3acb727c58bd" containerName="util" Jan 05 21:45:48 crc kubenswrapper[5000]: E0105 21:45:48.201597 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71825513-a9cf-4528-962f-b0c05006bdcd" containerName="console" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.201606 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="71825513-a9cf-4528-962f-b0c05006bdcd" 
containerName="console" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.201739 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec09c357-2496-458f-8c66-3acb727c58bd" containerName="extract" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.201754 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="71825513-a9cf-4528-962f-b0c05006bdcd" containerName="console" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.202289 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.205391 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gl6tm" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.205393 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.205451 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.205591 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.205630 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.216476 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw"] Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.291041 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61add664-ba89-4308-a9bc-fedeb78aa01d-webhook-cert\") 
pod \"metallb-operator-controller-manager-5786b66bf7-nhsgw\" (UID: \"61add664-ba89-4308-a9bc-fedeb78aa01d\") " pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.291105 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61add664-ba89-4308-a9bc-fedeb78aa01d-apiservice-cert\") pod \"metallb-operator-controller-manager-5786b66bf7-nhsgw\" (UID: \"61add664-ba89-4308-a9bc-fedeb78aa01d\") " pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.291167 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5tlg\" (UniqueName: \"kubernetes.io/projected/61add664-ba89-4308-a9bc-fedeb78aa01d-kube-api-access-n5tlg\") pod \"metallb-operator-controller-manager-5786b66bf7-nhsgw\" (UID: \"61add664-ba89-4308-a9bc-fedeb78aa01d\") " pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.393053 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5tlg\" (UniqueName: \"kubernetes.io/projected/61add664-ba89-4308-a9bc-fedeb78aa01d-kube-api-access-n5tlg\") pod \"metallb-operator-controller-manager-5786b66bf7-nhsgw\" (UID: \"61add664-ba89-4308-a9bc-fedeb78aa01d\") " pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.393195 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61add664-ba89-4308-a9bc-fedeb78aa01d-webhook-cert\") pod \"metallb-operator-controller-manager-5786b66bf7-nhsgw\" (UID: \"61add664-ba89-4308-a9bc-fedeb78aa01d\") " 
pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.393225 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61add664-ba89-4308-a9bc-fedeb78aa01d-apiservice-cert\") pod \"metallb-operator-controller-manager-5786b66bf7-nhsgw\" (UID: \"61add664-ba89-4308-a9bc-fedeb78aa01d\") " pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.399922 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61add664-ba89-4308-a9bc-fedeb78aa01d-apiservice-cert\") pod \"metallb-operator-controller-manager-5786b66bf7-nhsgw\" (UID: \"61add664-ba89-4308-a9bc-fedeb78aa01d\") " pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.403169 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61add664-ba89-4308-a9bc-fedeb78aa01d-webhook-cert\") pod \"metallb-operator-controller-manager-5786b66bf7-nhsgw\" (UID: \"61add664-ba89-4308-a9bc-fedeb78aa01d\") " pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.419175 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5tlg\" (UniqueName: \"kubernetes.io/projected/61add664-ba89-4308-a9bc-fedeb78aa01d-kube-api-access-n5tlg\") pod \"metallb-operator-controller-manager-5786b66bf7-nhsgw\" (UID: \"61add664-ba89-4308-a9bc-fedeb78aa01d\") " pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.519582 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.744308 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt"] Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.745477 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.751981 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-lfsll" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.752158 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.752328 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.760022 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt"] Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.822536 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw"] Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.899441 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/edb7d669-1a88-412b-8629-ef80169998dd-webhook-cert\") pod \"metallb-operator-webhook-server-749c9dfbcd-wjtpt\" (UID: \"edb7d669-1a88-412b-8629-ef80169998dd\") " pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.899834 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvgsq\" (UniqueName: \"kubernetes.io/projected/edb7d669-1a88-412b-8629-ef80169998dd-kube-api-access-qvgsq\") pod \"metallb-operator-webhook-server-749c9dfbcd-wjtpt\" (UID: \"edb7d669-1a88-412b-8629-ef80169998dd\") " pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" Jan 05 21:45:48 crc kubenswrapper[5000]: I0105 21:45:48.899961 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/edb7d669-1a88-412b-8629-ef80169998dd-apiservice-cert\") pod \"metallb-operator-webhook-server-749c9dfbcd-wjtpt\" (UID: \"edb7d669-1a88-412b-8629-ef80169998dd\") " pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" Jan 05 21:45:49 crc kubenswrapper[5000]: I0105 21:45:49.001551 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/edb7d669-1a88-412b-8629-ef80169998dd-webhook-cert\") pod \"metallb-operator-webhook-server-749c9dfbcd-wjtpt\" (UID: \"edb7d669-1a88-412b-8629-ef80169998dd\") " pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" Jan 05 21:45:49 crc kubenswrapper[5000]: I0105 21:45:49.001612 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvgsq\" (UniqueName: \"kubernetes.io/projected/edb7d669-1a88-412b-8629-ef80169998dd-kube-api-access-qvgsq\") pod \"metallb-operator-webhook-server-749c9dfbcd-wjtpt\" (UID: \"edb7d669-1a88-412b-8629-ef80169998dd\") " pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" Jan 05 21:45:49 crc kubenswrapper[5000]: I0105 21:45:49.001657 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/edb7d669-1a88-412b-8629-ef80169998dd-apiservice-cert\") pod 
\"metallb-operator-webhook-server-749c9dfbcd-wjtpt\" (UID: \"edb7d669-1a88-412b-8629-ef80169998dd\") " pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" Jan 05 21:45:49 crc kubenswrapper[5000]: I0105 21:45:49.006554 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/edb7d669-1a88-412b-8629-ef80169998dd-webhook-cert\") pod \"metallb-operator-webhook-server-749c9dfbcd-wjtpt\" (UID: \"edb7d669-1a88-412b-8629-ef80169998dd\") " pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" Jan 05 21:45:49 crc kubenswrapper[5000]: I0105 21:45:49.006603 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/edb7d669-1a88-412b-8629-ef80169998dd-apiservice-cert\") pod \"metallb-operator-webhook-server-749c9dfbcd-wjtpt\" (UID: \"edb7d669-1a88-412b-8629-ef80169998dd\") " pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" Jan 05 21:45:49 crc kubenswrapper[5000]: I0105 21:45:49.017329 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvgsq\" (UniqueName: \"kubernetes.io/projected/edb7d669-1a88-412b-8629-ef80169998dd-kube-api-access-qvgsq\") pod \"metallb-operator-webhook-server-749c9dfbcd-wjtpt\" (UID: \"edb7d669-1a88-412b-8629-ef80169998dd\") " pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" Jan 05 21:45:49 crc kubenswrapper[5000]: I0105 21:45:49.069561 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" Jan 05 21:45:49 crc kubenswrapper[5000]: I0105 21:45:49.288700 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt"] Jan 05 21:45:49 crc kubenswrapper[5000]: W0105 21:45:49.297718 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedb7d669_1a88_412b_8629_ef80169998dd.slice/crio-890a9f7778f39d2c3aacfb134e214710b2498e55966884c34e95fa8cb7bdc172 WatchSource:0}: Error finding container 890a9f7778f39d2c3aacfb134e214710b2498e55966884c34e95fa8cb7bdc172: Status 404 returned error can't find the container with id 890a9f7778f39d2c3aacfb134e214710b2498e55966884c34e95fa8cb7bdc172 Jan 05 21:45:49 crc kubenswrapper[5000]: I0105 21:45:49.651811 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" event={"ID":"61add664-ba89-4308-a9bc-fedeb78aa01d","Type":"ContainerStarted","Data":"80418d24ddec110c04e697e40698743b947c5eb569e26b3b4a0acbfecb8cf459"} Jan 05 21:45:49 crc kubenswrapper[5000]: I0105 21:45:49.654153 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" event={"ID":"edb7d669-1a88-412b-8629-ef80169998dd","Type":"ContainerStarted","Data":"890a9f7778f39d2c3aacfb134e214710b2498e55966884c34e95fa8cb7bdc172"} Jan 05 21:45:53 crc kubenswrapper[5000]: I0105 21:45:53.681335 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" event={"ID":"61add664-ba89-4308-a9bc-fedeb78aa01d","Type":"ContainerStarted","Data":"fd715dc9b23c8ccb8564e49b1022a242265eb51a4f0ef0b7aa82db197babd46b"} Jan 05 21:45:53 crc kubenswrapper[5000]: I0105 21:45:53.682029 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" Jan 05 21:45:53 crc kubenswrapper[5000]: I0105 21:45:53.683762 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" event={"ID":"edb7d669-1a88-412b-8629-ef80169998dd","Type":"ContainerStarted","Data":"a6bebc09bdec3bcd2fd165c3071878a78fd7cba4cf80edb69bc93d2a432b4385"} Jan 05 21:45:53 crc kubenswrapper[5000]: I0105 21:45:53.684043 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" Jan 05 21:45:53 crc kubenswrapper[5000]: I0105 21:45:53.703734 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" podStartSLOduration=1.309500896 podStartE2EDuration="5.703717264s" podCreationTimestamp="2026-01-05 21:45:48 +0000 UTC" firstStartedPulling="2026-01-05 21:45:48.832870767 +0000 UTC m=+703.789073236" lastFinishedPulling="2026-01-05 21:45:53.227087135 +0000 UTC m=+708.183289604" observedRunningTime="2026-01-05 21:45:53.700294106 +0000 UTC m=+708.656496595" watchObservedRunningTime="2026-01-05 21:45:53.703717264 +0000 UTC m=+708.659919723" Jan 05 21:45:53 crc kubenswrapper[5000]: I0105 21:45:53.720366 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" podStartSLOduration=1.767836622 podStartE2EDuration="5.720348239s" podCreationTimestamp="2026-01-05 21:45:48 +0000 UTC" firstStartedPulling="2026-01-05 21:45:49.300722705 +0000 UTC m=+704.256925164" lastFinishedPulling="2026-01-05 21:45:53.253234312 +0000 UTC m=+708.209436781" observedRunningTime="2026-01-05 21:45:53.718753573 +0000 UTC m=+708.674956042" watchObservedRunningTime="2026-01-05 21:45:53.720348239 +0000 UTC m=+708.676550708" Jan 05 21:46:09 crc kubenswrapper[5000]: I0105 21:46:09.092143 5000 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-749c9dfbcd-wjtpt" Jan 05 21:46:23 crc kubenswrapper[5000]: I0105 21:46:23.099156 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:46:23 crc kubenswrapper[5000]: I0105 21:46:23.099711 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:46:28 crc kubenswrapper[5000]: I0105 21:46:28.522950 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5786b66bf7-nhsgw" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.317624 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-xdjxg"] Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.320737 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.326291 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.328560 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.331714 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5"] Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.332454 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.340184 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.343304 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2c49dab8-fe42-472c-96d4-5bb565f9042b-reloader\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.343345 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2c49dab8-fe42-472c-96d4-5bb565f9042b-frr-sockets\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.343398 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2c49dab8-fe42-472c-96d4-5bb565f9042b-metrics\") pod \"frr-k8s-xdjxg\" (UID: 
\"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.343421 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5td\" (UniqueName: \"kubernetes.io/projected/2c49dab8-fe42-472c-96d4-5bb565f9042b-kube-api-access-dk5td\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.343459 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2c49dab8-fe42-472c-96d4-5bb565f9042b-frr-startup\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.343477 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/468e8ed3-60c2-4cf4-8c3e-be1d5e91674f-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-ql6m5\" (UID: \"468e8ed3-60c2-4cf4-8c3e-be1d5e91674f\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.343495 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c49dab8-fe42-472c-96d4-5bb565f9042b-metrics-certs\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.343557 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxgnf\" (UniqueName: \"kubernetes.io/projected/468e8ed3-60c2-4cf4-8c3e-be1d5e91674f-kube-api-access-dxgnf\") pod \"frr-k8s-webhook-server-7784b6fcf-ql6m5\" (UID: 
\"468e8ed3-60c2-4cf4-8c3e-be1d5e91674f\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.343657 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2c49dab8-fe42-472c-96d4-5bb565f9042b-frr-conf\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.349002 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5"] Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.352865 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-fht7g" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.417532 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7cjvw"] Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.418460 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7cjvw" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.422087 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-sbr8s" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.422179 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.422276 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.422361 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445145 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxgnf\" (UniqueName: \"kubernetes.io/projected/468e8ed3-60c2-4cf4-8c3e-be1d5e91674f-kube-api-access-dxgnf\") pod \"frr-k8s-webhook-server-7784b6fcf-ql6m5\" (UID: \"468e8ed3-60c2-4cf4-8c3e-be1d5e91674f\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445229 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2c49dab8-fe42-472c-96d4-5bb565f9042b-frr-conf\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445267 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-memberlist\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445301 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b49f39fb-cf2e-4bae-aefd-e476b4155444-metallb-excludel2\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445326 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2c49dab8-fe42-472c-96d4-5bb565f9042b-reloader\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445493 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-metrics-certs\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445518 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2c49dab8-fe42-472c-96d4-5bb565f9042b-frr-sockets\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445558 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2c49dab8-fe42-472c-96d4-5bb565f9042b-metrics\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445590 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw7rs\" (UniqueName: 
\"kubernetes.io/projected/b49f39fb-cf2e-4bae-aefd-e476b4155444-kube-api-access-jw7rs\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445616 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk5td\" (UniqueName: \"kubernetes.io/projected/2c49dab8-fe42-472c-96d4-5bb565f9042b-kube-api-access-dk5td\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445641 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2c49dab8-fe42-472c-96d4-5bb565f9042b-frr-startup\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445662 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/468e8ed3-60c2-4cf4-8c3e-be1d5e91674f-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-ql6m5\" (UID: \"468e8ed3-60c2-4cf4-8c3e-be1d5e91674f\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445687 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c49dab8-fe42-472c-96d4-5bb565f9042b-metrics-certs\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.445915 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2c49dab8-fe42-472c-96d4-5bb565f9042b-frr-conf\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " 
pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.446029 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2c49dab8-fe42-472c-96d4-5bb565f9042b-reloader\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.446132 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2c49dab8-fe42-472c-96d4-5bb565f9042b-metrics\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.446395 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2c49dab8-fe42-472c-96d4-5bb565f9042b-frr-sockets\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.446662 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-fvbvp"] Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.446822 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2c49dab8-fe42-472c-96d4-5bb565f9042b-frr-startup\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.447812 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.449607 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.450633 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-fvbvp"] Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.454695 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c49dab8-fe42-472c-96d4-5bb565f9042b-metrics-certs\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.454707 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/468e8ed3-60c2-4cf4-8c3e-be1d5e91674f-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-ql6m5\" (UID: \"468e8ed3-60c2-4cf4-8c3e-be1d5e91674f\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.469574 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxgnf\" (UniqueName: \"kubernetes.io/projected/468e8ed3-60c2-4cf4-8c3e-be1d5e91674f-kube-api-access-dxgnf\") pod \"frr-k8s-webhook-server-7784b6fcf-ql6m5\" (UID: \"468e8ed3-60c2-4cf4-8c3e-be1d5e91674f\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.477524 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk5td\" (UniqueName: \"kubernetes.io/projected/2c49dab8-fe42-472c-96d4-5bb565f9042b-kube-api-access-dk5td\") pod \"frr-k8s-xdjxg\" (UID: \"2c49dab8-fe42-472c-96d4-5bb565f9042b\") " pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 
21:46:29.546593 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-memberlist\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.546665 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b49f39fb-cf2e-4bae-aefd-e476b4155444-metallb-excludel2\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.546694 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-metrics-certs\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.546728 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/768d8155-0383-40d9-993e-fe7a60a3b020-cert\") pod \"controller-5bddd4b946-fvbvp\" (UID: \"768d8155-0383-40d9-993e-fe7a60a3b020\") " pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:29 crc kubenswrapper[5000]: E0105 21:46:29.546744 5000 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.546774 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw7rs\" (UniqueName: \"kubernetes.io/projected/b49f39fb-cf2e-4bae-aefd-e476b4155444-kube-api-access-jw7rs\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:29 crc 
kubenswrapper[5000]: E0105 21:46:29.546814 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-memberlist podName:b49f39fb-cf2e-4bae-aefd-e476b4155444 nodeName:}" failed. No retries permitted until 2026-01-05 21:46:30.046791438 +0000 UTC m=+745.002993907 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-memberlist") pod "speaker-7cjvw" (UID: "b49f39fb-cf2e-4bae-aefd-e476b4155444") : secret "metallb-memberlist" not found Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.546835 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/768d8155-0383-40d9-993e-fe7a60a3b020-metrics-certs\") pod \"controller-5bddd4b946-fvbvp\" (UID: \"768d8155-0383-40d9-993e-fe7a60a3b020\") " pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:29 crc kubenswrapper[5000]: E0105 21:46:29.546978 5000 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 05 21:46:29 crc kubenswrapper[5000]: E0105 21:46:29.547049 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-metrics-certs podName:b49f39fb-cf2e-4bae-aefd-e476b4155444 nodeName:}" failed. No retries permitted until 2026-01-05 21:46:30.047037785 +0000 UTC m=+745.003240254 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-metrics-certs") pod "speaker-7cjvw" (UID: "b49f39fb-cf2e-4bae-aefd-e476b4155444") : secret "speaker-certs-secret" not found Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.547103 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gn6g\" (UniqueName: \"kubernetes.io/projected/768d8155-0383-40d9-993e-fe7a60a3b020-kube-api-access-9gn6g\") pod \"controller-5bddd4b946-fvbvp\" (UID: \"768d8155-0383-40d9-993e-fe7a60a3b020\") " pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.547485 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b49f39fb-cf2e-4bae-aefd-e476b4155444-metallb-excludel2\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.561923 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw7rs\" (UniqueName: \"kubernetes.io/projected/b49f39fb-cf2e-4bae-aefd-e476b4155444-kube-api-access-jw7rs\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.640366 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.648644 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/768d8155-0383-40d9-993e-fe7a60a3b020-cert\") pod \"controller-5bddd4b946-fvbvp\" (UID: \"768d8155-0383-40d9-993e-fe7a60a3b020\") " pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.648712 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/768d8155-0383-40d9-993e-fe7a60a3b020-metrics-certs\") pod \"controller-5bddd4b946-fvbvp\" (UID: \"768d8155-0383-40d9-993e-fe7a60a3b020\") " pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.648740 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gn6g\" (UniqueName: \"kubernetes.io/projected/768d8155-0383-40d9-993e-fe7a60a3b020-kube-api-access-9gn6g\") pod \"controller-5bddd4b946-fvbvp\" (UID: \"768d8155-0383-40d9-993e-fe7a60a3b020\") " pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:29 crc kubenswrapper[5000]: E0105 21:46:29.649125 5000 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 05 21:46:29 crc kubenswrapper[5000]: E0105 21:46:29.649173 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/768d8155-0383-40d9-993e-fe7a60a3b020-metrics-certs podName:768d8155-0383-40d9-993e-fe7a60a3b020 nodeName:}" failed. No retries permitted until 2026-01-05 21:46:30.149159451 +0000 UTC m=+745.105361920 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/768d8155-0383-40d9-993e-fe7a60a3b020-metrics-certs") pod "controller-5bddd4b946-fvbvp" (UID: "768d8155-0383-40d9-993e-fe7a60a3b020") : secret "controller-certs-secret" not found Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.652715 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.652787 5000 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.662176 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/768d8155-0383-40d9-993e-fe7a60a3b020-cert\") pod \"controller-5bddd4b946-fvbvp\" (UID: \"768d8155-0383-40d9-993e-fe7a60a3b020\") " pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:29 crc kubenswrapper[5000]: I0105 21:46:29.670741 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gn6g\" (UniqueName: \"kubernetes.io/projected/768d8155-0383-40d9-993e-fe7a60a3b020-kube-api-access-9gn6g\") pod \"controller-5bddd4b946-fvbvp\" (UID: \"768d8155-0383-40d9-993e-fe7a60a3b020\") " pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:30 crc kubenswrapper[5000]: I0105 21:46:30.028686 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xdjxg" event={"ID":"2c49dab8-fe42-472c-96d4-5bb565f9042b","Type":"ContainerStarted","Data":"85f17bec6d063ab64fc38383e2c9eea04b431f2fd7e178a6529473352346e37b"} Jan 05 21:46:30 crc kubenswrapper[5000]: I0105 21:46:30.052638 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-memberlist\") pod \"speaker-7cjvw\" (UID: 
\"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:30 crc kubenswrapper[5000]: I0105 21:46:30.052695 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-metrics-certs\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:30 crc kubenswrapper[5000]: E0105 21:46:30.052813 5000 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 05 21:46:30 crc kubenswrapper[5000]: E0105 21:46:30.052883 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-memberlist podName:b49f39fb-cf2e-4bae-aefd-e476b4155444 nodeName:}" failed. No retries permitted until 2026-01-05 21:46:31.052866646 +0000 UTC m=+746.009069115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-memberlist") pod "speaker-7cjvw" (UID: "b49f39fb-cf2e-4bae-aefd-e476b4155444") : secret "metallb-memberlist" not found Jan 05 21:46:30 crc kubenswrapper[5000]: I0105 21:46:30.060581 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-metrics-certs\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:30 crc kubenswrapper[5000]: I0105 21:46:30.154320 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/768d8155-0383-40d9-993e-fe7a60a3b020-metrics-certs\") pod \"controller-5bddd4b946-fvbvp\" (UID: \"768d8155-0383-40d9-993e-fe7a60a3b020\") " pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:30 crc 
kubenswrapper[5000]: I0105 21:46:30.160087 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/768d8155-0383-40d9-993e-fe7a60a3b020-metrics-certs\") pod \"controller-5bddd4b946-fvbvp\" (UID: \"768d8155-0383-40d9-993e-fe7a60a3b020\") " pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:30 crc kubenswrapper[5000]: I0105 21:46:30.197781 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5"] Jan 05 21:46:30 crc kubenswrapper[5000]: W0105 21:46:30.204480 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod468e8ed3_60c2_4cf4_8c3e_be1d5e91674f.slice/crio-14824b4ba5f92ddf0b6fe95035c256f6788e263b4575835c50b43481fbf47b3b WatchSource:0}: Error finding container 14824b4ba5f92ddf0b6fe95035c256f6788e263b4575835c50b43481fbf47b3b: Status 404 returned error can't find the container with id 14824b4ba5f92ddf0b6fe95035c256f6788e263b4575835c50b43481fbf47b3b Jan 05 21:46:30 crc kubenswrapper[5000]: I0105 21:46:30.411644 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:30 crc kubenswrapper[5000]: I0105 21:46:30.628086 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-fvbvp"] Jan 05 21:46:31 crc kubenswrapper[5000]: I0105 21:46:31.042157 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-fvbvp" event={"ID":"768d8155-0383-40d9-993e-fe7a60a3b020","Type":"ContainerStarted","Data":"0eeef7cecf58d83f792d492b2f0eb3615b037473717708fd3ed196a45fb04fa6"} Jan 05 21:46:31 crc kubenswrapper[5000]: I0105 21:46:31.042198 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-fvbvp" event={"ID":"768d8155-0383-40d9-993e-fe7a60a3b020","Type":"ContainerStarted","Data":"432f1b2da79b8303ef0f1c7f6b856f14ebf3afa9ed9e47b93157c7e325c84e6f"} Jan 05 21:46:31 crc kubenswrapper[5000]: I0105 21:46:31.042209 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-fvbvp" event={"ID":"768d8155-0383-40d9-993e-fe7a60a3b020","Type":"ContainerStarted","Data":"a5d07dfc1f3f5c838da4872bd945e09f4b2b2a4730e25f82956f994c1929a35a"} Jan 05 21:46:31 crc kubenswrapper[5000]: I0105 21:46:31.043521 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:31 crc kubenswrapper[5000]: I0105 21:46:31.045321 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5" event={"ID":"468e8ed3-60c2-4cf4-8c3e-be1d5e91674f","Type":"ContainerStarted","Data":"14824b4ba5f92ddf0b6fe95035c256f6788e263b4575835c50b43481fbf47b3b"} Jan 05 21:46:31 crc kubenswrapper[5000]: I0105 21:46:31.064841 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-fvbvp" podStartSLOduration=2.064819187 podStartE2EDuration="2.064819187s" podCreationTimestamp="2026-01-05 
21:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:46:31.063695314 +0000 UTC m=+746.019897773" watchObservedRunningTime="2026-01-05 21:46:31.064819187 +0000 UTC m=+746.021021666" Jan 05 21:46:31 crc kubenswrapper[5000]: I0105 21:46:31.071542 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-memberlist\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:31 crc kubenswrapper[5000]: I0105 21:46:31.077831 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b49f39fb-cf2e-4bae-aefd-e476b4155444-memberlist\") pod \"speaker-7cjvw\" (UID: \"b49f39fb-cf2e-4bae-aefd-e476b4155444\") " pod="metallb-system/speaker-7cjvw" Jan 05 21:46:31 crc kubenswrapper[5000]: I0105 21:46:31.234728 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7cjvw" Jan 05 21:46:31 crc kubenswrapper[5000]: W0105 21:46:31.274119 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb49f39fb_cf2e_4bae_aefd_e476b4155444.slice/crio-b7a4d5b2ac3b5cf6933b9d12acebc34bd87540662a8714ac1c4d99cb9f2dd253 WatchSource:0}: Error finding container b7a4d5b2ac3b5cf6933b9d12acebc34bd87540662a8714ac1c4d99cb9f2dd253: Status 404 returned error can't find the container with id b7a4d5b2ac3b5cf6933b9d12acebc34bd87540662a8714ac1c4d99cb9f2dd253 Jan 05 21:46:32 crc kubenswrapper[5000]: I0105 21:46:32.058501 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7cjvw" event={"ID":"b49f39fb-cf2e-4bae-aefd-e476b4155444","Type":"ContainerStarted","Data":"6400e143c46a09af344701ee9901d41f0948e37ee8920c4139a68a7f31b48d63"} Jan 05 21:46:32 crc kubenswrapper[5000]: I0105 21:46:32.058803 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7cjvw" event={"ID":"b49f39fb-cf2e-4bae-aefd-e476b4155444","Type":"ContainerStarted","Data":"c143e1b8a1835e9d123a8802861071bf8d055ac659b588b2f7c01f0ab7d34ba2"} Jan 05 21:46:32 crc kubenswrapper[5000]: I0105 21:46:32.058815 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7cjvw" event={"ID":"b49f39fb-cf2e-4bae-aefd-e476b4155444","Type":"ContainerStarted","Data":"b7a4d5b2ac3b5cf6933b9d12acebc34bd87540662a8714ac1c4d99cb9f2dd253"} Jan 05 21:46:32 crc kubenswrapper[5000]: I0105 21:46:32.059071 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7cjvw" Jan 05 21:46:32 crc kubenswrapper[5000]: I0105 21:46:32.089529 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-7cjvw" podStartSLOduration=3.08950745 podStartE2EDuration="3.08950745s" podCreationTimestamp="2026-01-05 21:46:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:46:32.085204648 +0000 UTC m=+747.041407117" watchObservedRunningTime="2026-01-05 21:46:32.08950745 +0000 UTC m=+747.045709919" Jan 05 21:46:37 crc kubenswrapper[5000]: I0105 21:46:37.093013 5000 generic.go:334] "Generic (PLEG): container finished" podID="2c49dab8-fe42-472c-96d4-5bb565f9042b" containerID="e19dc6a40f626b73e6d3cf7f6c9e9468bc275feff7b094585ee100728a350b7a" exitCode=0 Jan 05 21:46:37 crc kubenswrapper[5000]: I0105 21:46:37.093570 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xdjxg" event={"ID":"2c49dab8-fe42-472c-96d4-5bb565f9042b","Type":"ContainerDied","Data":"e19dc6a40f626b73e6d3cf7f6c9e9468bc275feff7b094585ee100728a350b7a"} Jan 05 21:46:37 crc kubenswrapper[5000]: I0105 21:46:37.096496 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5" event={"ID":"468e8ed3-60c2-4cf4-8c3e-be1d5e91674f","Type":"ContainerStarted","Data":"799dee940cad9a436593293398f1ac4126cc948e681a684a51d8cb80238c1e19"} Jan 05 21:46:37 crc kubenswrapper[5000]: I0105 21:46:37.096639 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5" Jan 05 21:46:37 crc kubenswrapper[5000]: I0105 21:46:37.136827 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5" podStartSLOduration=1.809213489 podStartE2EDuration="8.136809294s" podCreationTimestamp="2026-01-05 21:46:29 +0000 UTC" firstStartedPulling="2026-01-05 21:46:30.206848202 +0000 UTC m=+745.163050671" lastFinishedPulling="2026-01-05 21:46:36.534444007 +0000 UTC m=+751.490646476" observedRunningTime="2026-01-05 21:46:37.132200802 +0000 UTC m=+752.088403281" watchObservedRunningTime="2026-01-05 21:46:37.136809294 +0000 UTC m=+752.093011783" Jan 05 21:46:38 
crc kubenswrapper[5000]: I0105 21:46:38.105444 5000 generic.go:334] "Generic (PLEG): container finished" podID="2c49dab8-fe42-472c-96d4-5bb565f9042b" containerID="e1df32bf49d9047e068b206d442eb9a7f94bc7595adb7f67bb528e0c871c5126" exitCode=0 Jan 05 21:46:38 crc kubenswrapper[5000]: I0105 21:46:38.105549 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xdjxg" event={"ID":"2c49dab8-fe42-472c-96d4-5bb565f9042b","Type":"ContainerDied","Data":"e1df32bf49d9047e068b206d442eb9a7f94bc7595adb7f67bb528e0c871c5126"} Jan 05 21:46:39 crc kubenswrapper[5000]: I0105 21:46:39.117647 5000 generic.go:334] "Generic (PLEG): container finished" podID="2c49dab8-fe42-472c-96d4-5bb565f9042b" containerID="853ab09cf6064db9a78314eb7afa3dd13a0b26441d9506e3b03dedac56517c79" exitCode=0 Jan 05 21:46:39 crc kubenswrapper[5000]: I0105 21:46:39.117744 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xdjxg" event={"ID":"2c49dab8-fe42-472c-96d4-5bb565f9042b","Type":"ContainerDied","Data":"853ab09cf6064db9a78314eb7afa3dd13a0b26441d9506e3b03dedac56517c79"} Jan 05 21:46:40 crc kubenswrapper[5000]: I0105 21:46:40.126437 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xdjxg" event={"ID":"2c49dab8-fe42-472c-96d4-5bb565f9042b","Type":"ContainerStarted","Data":"f3f567608fcbd09f9bcd523856026c7a7b5bce7c4c4b439b6f9f6215ff3d1574"} Jan 05 21:46:40 crc kubenswrapper[5000]: I0105 21:46:40.126745 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:40 crc kubenswrapper[5000]: I0105 21:46:40.126761 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xdjxg" event={"ID":"2c49dab8-fe42-472c-96d4-5bb565f9042b","Type":"ContainerStarted","Data":"5db383f9dd7cdfbf24693e382065febeeb1e4b4bd4cd02102045b71dff908b0d"} Jan 05 21:46:40 crc kubenswrapper[5000]: I0105 21:46:40.126774 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-xdjxg" event={"ID":"2c49dab8-fe42-472c-96d4-5bb565f9042b","Type":"ContainerStarted","Data":"eb826067217a47143fedea25408cc93ffec52bef559661b179fddfb21bd5cd8c"} Jan 05 21:46:40 crc kubenswrapper[5000]: I0105 21:46:40.126787 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xdjxg" event={"ID":"2c49dab8-fe42-472c-96d4-5bb565f9042b","Type":"ContainerStarted","Data":"f1a2a8e316ae909d34e7c205c59d2407e3ff9c59def08de5496e81a339a7c85e"} Jan 05 21:46:40 crc kubenswrapper[5000]: I0105 21:46:40.126796 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xdjxg" event={"ID":"2c49dab8-fe42-472c-96d4-5bb565f9042b","Type":"ContainerStarted","Data":"9d1b4eb70394b89c3046bc76c0c41e8feae4f7676f506ed396a4b5a72dbad431"} Jan 05 21:46:40 crc kubenswrapper[5000]: I0105 21:46:40.126806 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xdjxg" event={"ID":"2c49dab8-fe42-472c-96d4-5bb565f9042b","Type":"ContainerStarted","Data":"7ff6a253b8672aed4cfe52a59614f3f88826992ebc3118ca6519a600656832f2"} Jan 05 21:46:40 crc kubenswrapper[5000]: I0105 21:46:40.154129 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-xdjxg" podStartSLOduration=4.465330138 podStartE2EDuration="11.154105973s" podCreationTimestamp="2026-01-05 21:46:29 +0000 UTC" firstStartedPulling="2026-01-05 21:46:29.849583343 +0000 UTC m=+744.805785812" lastFinishedPulling="2026-01-05 21:46:36.538359178 +0000 UTC m=+751.494561647" observedRunningTime="2026-01-05 21:46:40.153613709 +0000 UTC m=+755.109816208" watchObservedRunningTime="2026-01-05 21:46:40.154105973 +0000 UTC m=+755.110308442" Jan 05 21:46:40 crc kubenswrapper[5000]: I0105 21:46:40.416226 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-fvbvp" Jan 05 21:46:41 crc kubenswrapper[5000]: I0105 21:46:41.238656 5000 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7cjvw" Jan 05 21:46:44 crc kubenswrapper[5000]: I0105 21:46:44.532013 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-j4mjj"] Jan 05 21:46:44 crc kubenswrapper[5000]: I0105 21:46:44.533016 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-j4mjj" Jan 05 21:46:44 crc kubenswrapper[5000]: I0105 21:46:44.545023 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j4mjj"] Jan 05 21:46:44 crc kubenswrapper[5000]: I0105 21:46:44.546143 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 05 21:46:44 crc kubenswrapper[5000]: I0105 21:46:44.546354 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 05 21:46:44 crc kubenswrapper[5000]: I0105 21:46:44.546805 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-52cdl" Jan 05 21:46:44 crc kubenswrapper[5000]: I0105 21:46:44.641366 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:44 crc kubenswrapper[5000]: I0105 21:46:44.680356 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:44 crc kubenswrapper[5000]: I0105 21:46:44.680937 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shbmn\" (UniqueName: \"kubernetes.io/projected/dca7c4af-7c53-43a8-9295-08d0e3e7d6be-kube-api-access-shbmn\") pod \"openstack-operator-index-j4mjj\" (UID: \"dca7c4af-7c53-43a8-9295-08d0e3e7d6be\") " pod="openstack-operators/openstack-operator-index-j4mjj" Jan 05 21:46:44 crc kubenswrapper[5000]: 
I0105 21:46:44.782041 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shbmn\" (UniqueName: \"kubernetes.io/projected/dca7c4af-7c53-43a8-9295-08d0e3e7d6be-kube-api-access-shbmn\") pod \"openstack-operator-index-j4mjj\" (UID: \"dca7c4af-7c53-43a8-9295-08d0e3e7d6be\") " pod="openstack-operators/openstack-operator-index-j4mjj" Jan 05 21:46:44 crc kubenswrapper[5000]: I0105 21:46:44.804979 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shbmn\" (UniqueName: \"kubernetes.io/projected/dca7c4af-7c53-43a8-9295-08d0e3e7d6be-kube-api-access-shbmn\") pod \"openstack-operator-index-j4mjj\" (UID: \"dca7c4af-7c53-43a8-9295-08d0e3e7d6be\") " pod="openstack-operators/openstack-operator-index-j4mjj" Jan 05 21:46:44 crc kubenswrapper[5000]: I0105 21:46:44.857078 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-j4mjj" Jan 05 21:46:44 crc kubenswrapper[5000]: I0105 21:46:44.932357 5000 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 05 21:46:45 crc kubenswrapper[5000]: I0105 21:46:45.100468 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-j4mjj"] Jan 05 21:46:45 crc kubenswrapper[5000]: I0105 21:46:45.166046 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j4mjj" event={"ID":"dca7c4af-7c53-43a8-9295-08d0e3e7d6be","Type":"ContainerStarted","Data":"3cfdaa7cc8272b65eacd1a722d726aa3a9b2efa553b86874e76ca2d582be88fe"} Jan 05 21:46:47 crc kubenswrapper[5000]: I0105 21:46:47.909155 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-j4mjj"] Jan 05 21:46:48 crc kubenswrapper[5000]: I0105 21:46:48.184634 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-j4mjj" event={"ID":"dca7c4af-7c53-43a8-9295-08d0e3e7d6be","Type":"ContainerStarted","Data":"1323994bd0d18ec889d19e15cb59114737f7f6bbd00b31b004d5595bf1357d80"} Jan 05 21:46:48 crc kubenswrapper[5000]: I0105 21:46:48.508302 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-j4mjj" podStartSLOduration=1.9431052210000002 podStartE2EDuration="4.508280754s" podCreationTimestamp="2026-01-05 21:46:44 +0000 UTC" firstStartedPulling="2026-01-05 21:46:45.105167389 +0000 UTC m=+760.061369858" lastFinishedPulling="2026-01-05 21:46:47.670342912 +0000 UTC m=+762.626545391" observedRunningTime="2026-01-05 21:46:48.201941418 +0000 UTC m=+763.158143947" watchObservedRunningTime="2026-01-05 21:46:48.508280754 +0000 UTC m=+763.464483223" Jan 05 21:46:48 crc kubenswrapper[5000]: I0105 21:46:48.513405 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jm56r"] Jan 05 21:46:48 crc kubenswrapper[5000]: I0105 21:46:48.514123 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jm56r" Jan 05 21:46:48 crc kubenswrapper[5000]: I0105 21:46:48.531726 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jm56r"] Jan 05 21:46:48 crc kubenswrapper[5000]: I0105 21:46:48.563963 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzh4v\" (UniqueName: \"kubernetes.io/projected/3dfe8a9b-7998-4246-b195-b9a2ab968946-kube-api-access-bzh4v\") pod \"openstack-operator-index-jm56r\" (UID: \"3dfe8a9b-7998-4246-b195-b9a2ab968946\") " pod="openstack-operators/openstack-operator-index-jm56r" Jan 05 21:46:48 crc kubenswrapper[5000]: I0105 21:46:48.665232 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzh4v\" (UniqueName: \"kubernetes.io/projected/3dfe8a9b-7998-4246-b195-b9a2ab968946-kube-api-access-bzh4v\") pod \"openstack-operator-index-jm56r\" (UID: \"3dfe8a9b-7998-4246-b195-b9a2ab968946\") " pod="openstack-operators/openstack-operator-index-jm56r" Jan 05 21:46:48 crc kubenswrapper[5000]: I0105 21:46:48.683487 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzh4v\" (UniqueName: \"kubernetes.io/projected/3dfe8a9b-7998-4246-b195-b9a2ab968946-kube-api-access-bzh4v\") pod \"openstack-operator-index-jm56r\" (UID: \"3dfe8a9b-7998-4246-b195-b9a2ab968946\") " pod="openstack-operators/openstack-operator-index-jm56r" Jan 05 21:46:48 crc kubenswrapper[5000]: I0105 21:46:48.875796 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jm56r" Jan 05 21:46:49 crc kubenswrapper[5000]: I0105 21:46:49.192617 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-j4mjj" podUID="dca7c4af-7c53-43a8-9295-08d0e3e7d6be" containerName="registry-server" containerID="cri-o://1323994bd0d18ec889d19e15cb59114737f7f6bbd00b31b004d5595bf1357d80" gracePeriod=2 Jan 05 21:46:49 crc kubenswrapper[5000]: I0105 21:46:49.261852 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jm56r"] Jan 05 21:46:49 crc kubenswrapper[5000]: W0105 21:46:49.266548 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dfe8a9b_7998_4246_b195_b9a2ab968946.slice/crio-87c99e58bc1badeb23fed272136feb4dd3a0175529e559580b6fb1c61f0d3c77 WatchSource:0}: Error finding container 87c99e58bc1badeb23fed272136feb4dd3a0175529e559580b6fb1c61f0d3c77: Status 404 returned error can't find the container with id 87c99e58bc1badeb23fed272136feb4dd3a0175529e559580b6fb1c61f0d3c77 Jan 05 21:46:49 crc kubenswrapper[5000]: I0105 21:46:49.524776 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-j4mjj" Jan 05 21:46:49 crc kubenswrapper[5000]: I0105 21:46:49.576591 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shbmn\" (UniqueName: \"kubernetes.io/projected/dca7c4af-7c53-43a8-9295-08d0e3e7d6be-kube-api-access-shbmn\") pod \"dca7c4af-7c53-43a8-9295-08d0e3e7d6be\" (UID: \"dca7c4af-7c53-43a8-9295-08d0e3e7d6be\") " Jan 05 21:46:49 crc kubenswrapper[5000]: I0105 21:46:49.582063 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca7c4af-7c53-43a8-9295-08d0e3e7d6be-kube-api-access-shbmn" (OuterVolumeSpecName: "kube-api-access-shbmn") pod "dca7c4af-7c53-43a8-9295-08d0e3e7d6be" (UID: "dca7c4af-7c53-43a8-9295-08d0e3e7d6be"). InnerVolumeSpecName "kube-api-access-shbmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:46:49 crc kubenswrapper[5000]: I0105 21:46:49.644263 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xdjxg" Jan 05 21:46:49 crc kubenswrapper[5000]: I0105 21:46:49.659067 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ql6m5" Jan 05 21:46:49 crc kubenswrapper[5000]: I0105 21:46:49.677731 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shbmn\" (UniqueName: \"kubernetes.io/projected/dca7c4af-7c53-43a8-9295-08d0e3e7d6be-kube-api-access-shbmn\") on node \"crc\" DevicePath \"\"" Jan 05 21:46:50 crc kubenswrapper[5000]: I0105 21:46:50.203205 5000 generic.go:334] "Generic (PLEG): container finished" podID="dca7c4af-7c53-43a8-9295-08d0e3e7d6be" containerID="1323994bd0d18ec889d19e15cb59114737f7f6bbd00b31b004d5595bf1357d80" exitCode=0 Jan 05 21:46:50 crc kubenswrapper[5000]: I0105 21:46:50.203270 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j4mjj" 
event={"ID":"dca7c4af-7c53-43a8-9295-08d0e3e7d6be","Type":"ContainerDied","Data":"1323994bd0d18ec889d19e15cb59114737f7f6bbd00b31b004d5595bf1357d80"} Jan 05 21:46:50 crc kubenswrapper[5000]: I0105 21:46:50.203332 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-j4mjj" event={"ID":"dca7c4af-7c53-43a8-9295-08d0e3e7d6be","Type":"ContainerDied","Data":"3cfdaa7cc8272b65eacd1a722d726aa3a9b2efa553b86874e76ca2d582be88fe"} Jan 05 21:46:50 crc kubenswrapper[5000]: I0105 21:46:50.203364 5000 scope.go:117] "RemoveContainer" containerID="1323994bd0d18ec889d19e15cb59114737f7f6bbd00b31b004d5595bf1357d80" Jan 05 21:46:50 crc kubenswrapper[5000]: I0105 21:46:50.205085 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-j4mjj" Jan 05 21:46:50 crc kubenswrapper[5000]: I0105 21:46:50.206430 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jm56r" event={"ID":"3dfe8a9b-7998-4246-b195-b9a2ab968946","Type":"ContainerStarted","Data":"b1396913973e7482254aa74c422431f096758f6516236a5f12f3e8b4d1faffc8"} Jan 05 21:46:50 crc kubenswrapper[5000]: I0105 21:46:50.206493 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jm56r" event={"ID":"3dfe8a9b-7998-4246-b195-b9a2ab968946","Type":"ContainerStarted","Data":"87c99e58bc1badeb23fed272136feb4dd3a0175529e559580b6fb1c61f0d3c77"} Jan 05 21:46:50 crc kubenswrapper[5000]: I0105 21:46:50.235177 5000 scope.go:117] "RemoveContainer" containerID="1323994bd0d18ec889d19e15cb59114737f7f6bbd00b31b004d5595bf1357d80" Jan 05 21:46:50 crc kubenswrapper[5000]: E0105 21:46:50.235745 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1323994bd0d18ec889d19e15cb59114737f7f6bbd00b31b004d5595bf1357d80\": container with ID starting with 
1323994bd0d18ec889d19e15cb59114737f7f6bbd00b31b004d5595bf1357d80 not found: ID does not exist" containerID="1323994bd0d18ec889d19e15cb59114737f7f6bbd00b31b004d5595bf1357d80" Jan 05 21:46:50 crc kubenswrapper[5000]: I0105 21:46:50.235810 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1323994bd0d18ec889d19e15cb59114737f7f6bbd00b31b004d5595bf1357d80"} err="failed to get container status \"1323994bd0d18ec889d19e15cb59114737f7f6bbd00b31b004d5595bf1357d80\": rpc error: code = NotFound desc = could not find container \"1323994bd0d18ec889d19e15cb59114737f7f6bbd00b31b004d5595bf1357d80\": container with ID starting with 1323994bd0d18ec889d19e15cb59114737f7f6bbd00b31b004d5595bf1357d80 not found: ID does not exist" Jan 05 21:46:50 crc kubenswrapper[5000]: I0105 21:46:50.236546 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jm56r" podStartSLOduration=2.179015062 podStartE2EDuration="2.236530914s" podCreationTimestamp="2026-01-05 21:46:48 +0000 UTC" firstStartedPulling="2026-01-05 21:46:49.277629618 +0000 UTC m=+764.233832097" lastFinishedPulling="2026-01-05 21:46:49.33514549 +0000 UTC m=+764.291347949" observedRunningTime="2026-01-05 21:46:50.229948186 +0000 UTC m=+765.186150645" watchObservedRunningTime="2026-01-05 21:46:50.236530914 +0000 UTC m=+765.192733383" Jan 05 21:46:50 crc kubenswrapper[5000]: I0105 21:46:50.246208 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-j4mjj"] Jan 05 21:46:50 crc kubenswrapper[5000]: I0105 21:46:50.256367 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-j4mjj"] Jan 05 21:46:51 crc kubenswrapper[5000]: I0105 21:46:51.332543 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca7c4af-7c53-43a8-9295-08d0e3e7d6be" 
path="/var/lib/kubelet/pods/dca7c4af-7c53-43a8-9295-08d0e3e7d6be/volumes" Jan 05 21:46:53 crc kubenswrapper[5000]: I0105 21:46:53.098852 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:46:53 crc kubenswrapper[5000]: I0105 21:46:53.098961 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:46:58 crc kubenswrapper[5000]: I0105 21:46:58.876703 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-jm56r" Jan 05 21:46:58 crc kubenswrapper[5000]: I0105 21:46:58.877229 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-jm56r" Jan 05 21:46:58 crc kubenswrapper[5000]: I0105 21:46:58.902814 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-jm56r" Jan 05 21:46:59 crc kubenswrapper[5000]: I0105 21:46:59.276571 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-jm56r" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.732454 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6"] Jan 05 21:47:06 crc kubenswrapper[5000]: E0105 21:47:06.733291 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca7c4af-7c53-43a8-9295-08d0e3e7d6be" 
containerName="registry-server" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.733306 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca7c4af-7c53-43a8-9295-08d0e3e7d6be" containerName="registry-server" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.733444 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca7c4af-7c53-43a8-9295-08d0e3e7d6be" containerName="registry-server" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.734400 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.736809 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jxr6j" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.741773 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6"] Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.786435 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvs5g\" (UniqueName: \"kubernetes.io/projected/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-kube-api-access-fvs5g\") pod \"21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6\" (UID: \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\") " pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.786520 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-bundle\") pod \"21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6\" (UID: \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\") " 
pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.786665 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-util\") pod \"21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6\" (UID: \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\") " pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.887865 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvs5g\" (UniqueName: \"kubernetes.io/projected/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-kube-api-access-fvs5g\") pod \"21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6\" (UID: \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\") " pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.887955 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-bundle\") pod \"21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6\" (UID: \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\") " pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.888020 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-util\") pod \"21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6\" (UID: \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\") " pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 
21:47:06.888580 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-util\") pod \"21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6\" (UID: \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\") " pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.888687 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-bundle\") pod \"21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6\" (UID: \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\") " pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" Jan 05 21:47:06 crc kubenswrapper[5000]: I0105 21:47:06.907117 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvs5g\" (UniqueName: \"kubernetes.io/projected/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-kube-api-access-fvs5g\") pod \"21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6\" (UID: \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\") " pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" Jan 05 21:47:07 crc kubenswrapper[5000]: I0105 21:47:07.058379 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" Jan 05 21:47:07 crc kubenswrapper[5000]: I0105 21:47:07.286138 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6"] Jan 05 21:47:07 crc kubenswrapper[5000]: I0105 21:47:07.314150 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" event={"ID":"2263ae7c-d1ad-4e51-ac66-a254cf554cd3","Type":"ContainerStarted","Data":"c1c283d5dc7b77f600f8ee11e093b46bee31b66deae97698ef67c5d81df19254"} Jan 05 21:47:08 crc kubenswrapper[5000]: I0105 21:47:08.327656 5000 generic.go:334] "Generic (PLEG): container finished" podID="2263ae7c-d1ad-4e51-ac66-a254cf554cd3" containerID="1408c743a95e28cc1f347e0effb1af2eec5de3f3d1cd59a363ef9ed79af2e359" exitCode=0 Jan 05 21:47:08 crc kubenswrapper[5000]: I0105 21:47:08.327937 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" event={"ID":"2263ae7c-d1ad-4e51-ac66-a254cf554cd3","Type":"ContainerDied","Data":"1408c743a95e28cc1f347e0effb1af2eec5de3f3d1cd59a363ef9ed79af2e359"} Jan 05 21:47:09 crc kubenswrapper[5000]: I0105 21:47:09.334078 5000 generic.go:334] "Generic (PLEG): container finished" podID="2263ae7c-d1ad-4e51-ac66-a254cf554cd3" containerID="a747244b5a29a2228829aa826e261593a4e743de8556dcbad3abdf9ae2a4c446" exitCode=0 Jan 05 21:47:09 crc kubenswrapper[5000]: I0105 21:47:09.334184 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" event={"ID":"2263ae7c-d1ad-4e51-ac66-a254cf554cd3","Type":"ContainerDied","Data":"a747244b5a29a2228829aa826e261593a4e743de8556dcbad3abdf9ae2a4c446"} Jan 05 21:47:10 crc kubenswrapper[5000]: I0105 21:47:10.342015 5000 generic.go:334] 
"Generic (PLEG): container finished" podID="2263ae7c-d1ad-4e51-ac66-a254cf554cd3" containerID="47c2d9548681447d5ba3183f00eed46252fb0a78283e1ab6aac5285f5365dc16" exitCode=0 Jan 05 21:47:10 crc kubenswrapper[5000]: I0105 21:47:10.342064 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" event={"ID":"2263ae7c-d1ad-4e51-ac66-a254cf554cd3","Type":"ContainerDied","Data":"47c2d9548681447d5ba3183f00eed46252fb0a78283e1ab6aac5285f5365dc16"} Jan 05 21:47:11 crc kubenswrapper[5000]: I0105 21:47:11.586473 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" Jan 05 21:47:11 crc kubenswrapper[5000]: I0105 21:47:11.650271 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-util\") pod \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\" (UID: \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\") " Jan 05 21:47:11 crc kubenswrapper[5000]: I0105 21:47:11.650333 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-bundle\") pod \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\" (UID: \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\") " Jan 05 21:47:11 crc kubenswrapper[5000]: I0105 21:47:11.650924 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-bundle" (OuterVolumeSpecName: "bundle") pod "2263ae7c-d1ad-4e51-ac66-a254cf554cd3" (UID: "2263ae7c-d1ad-4e51-ac66-a254cf554cd3"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:47:11 crc kubenswrapper[5000]: I0105 21:47:11.650949 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvs5g\" (UniqueName: \"kubernetes.io/projected/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-kube-api-access-fvs5g\") pod \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\" (UID: \"2263ae7c-d1ad-4e51-ac66-a254cf554cd3\") " Jan 05 21:47:11 crc kubenswrapper[5000]: I0105 21:47:11.651192 5000 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:47:11 crc kubenswrapper[5000]: I0105 21:47:11.656495 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-kube-api-access-fvs5g" (OuterVolumeSpecName: "kube-api-access-fvs5g") pod "2263ae7c-d1ad-4e51-ac66-a254cf554cd3" (UID: "2263ae7c-d1ad-4e51-ac66-a254cf554cd3"). InnerVolumeSpecName "kube-api-access-fvs5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:47:11 crc kubenswrapper[5000]: I0105 21:47:11.663143 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-util" (OuterVolumeSpecName: "util") pod "2263ae7c-d1ad-4e51-ac66-a254cf554cd3" (UID: "2263ae7c-d1ad-4e51-ac66-a254cf554cd3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:47:11 crc kubenswrapper[5000]: I0105 21:47:11.752101 5000 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-util\") on node \"crc\" DevicePath \"\"" Jan 05 21:47:11 crc kubenswrapper[5000]: I0105 21:47:11.752144 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvs5g\" (UniqueName: \"kubernetes.io/projected/2263ae7c-d1ad-4e51-ac66-a254cf554cd3-kube-api-access-fvs5g\") on node \"crc\" DevicePath \"\"" Jan 05 21:47:12 crc kubenswrapper[5000]: I0105 21:47:12.358157 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" event={"ID":"2263ae7c-d1ad-4e51-ac66-a254cf554cd3","Type":"ContainerDied","Data":"c1c283d5dc7b77f600f8ee11e093b46bee31b66deae97698ef67c5d81df19254"} Jan 05 21:47:12 crc kubenswrapper[5000]: I0105 21:47:12.358485 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1c283d5dc7b77f600f8ee11e093b46bee31b66deae97698ef67c5d81df19254" Jan 05 21:47:12 crc kubenswrapper[5000]: I0105 21:47:12.358313 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6" Jan 05 21:47:19 crc kubenswrapper[5000]: I0105 21:47:19.275600 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-59bf84b846-bghfn"] Jan 05 21:47:19 crc kubenswrapper[5000]: E0105 21:47:19.276116 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2263ae7c-d1ad-4e51-ac66-a254cf554cd3" containerName="pull" Jan 05 21:47:19 crc kubenswrapper[5000]: I0105 21:47:19.276128 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="2263ae7c-d1ad-4e51-ac66-a254cf554cd3" containerName="pull" Jan 05 21:47:19 crc kubenswrapper[5000]: E0105 21:47:19.276151 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2263ae7c-d1ad-4e51-ac66-a254cf554cd3" containerName="extract" Jan 05 21:47:19 crc kubenswrapper[5000]: I0105 21:47:19.276156 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="2263ae7c-d1ad-4e51-ac66-a254cf554cd3" containerName="extract" Jan 05 21:47:19 crc kubenswrapper[5000]: E0105 21:47:19.276171 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2263ae7c-d1ad-4e51-ac66-a254cf554cd3" containerName="util" Jan 05 21:47:19 crc kubenswrapper[5000]: I0105 21:47:19.276178 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="2263ae7c-d1ad-4e51-ac66-a254cf554cd3" containerName="util" Jan 05 21:47:19 crc kubenswrapper[5000]: I0105 21:47:19.276285 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="2263ae7c-d1ad-4e51-ac66-a254cf554cd3" containerName="extract" Jan 05 21:47:19 crc kubenswrapper[5000]: I0105 21:47:19.276660 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-59bf84b846-bghfn" Jan 05 21:47:19 crc kubenswrapper[5000]: I0105 21:47:19.278638 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-8pwrr" Jan 05 21:47:19 crc kubenswrapper[5000]: I0105 21:47:19.314322 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-59bf84b846-bghfn"] Jan 05 21:47:19 crc kubenswrapper[5000]: I0105 21:47:19.355291 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sdjp\" (UniqueName: \"kubernetes.io/projected/e31709ea-50f3-4b79-9851-e6c21b82aa58-kube-api-access-5sdjp\") pod \"openstack-operator-controller-operator-59bf84b846-bghfn\" (UID: \"e31709ea-50f3-4b79-9851-e6c21b82aa58\") " pod="openstack-operators/openstack-operator-controller-operator-59bf84b846-bghfn" Jan 05 21:47:19 crc kubenswrapper[5000]: I0105 21:47:19.457120 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sdjp\" (UniqueName: \"kubernetes.io/projected/e31709ea-50f3-4b79-9851-e6c21b82aa58-kube-api-access-5sdjp\") pod \"openstack-operator-controller-operator-59bf84b846-bghfn\" (UID: \"e31709ea-50f3-4b79-9851-e6c21b82aa58\") " pod="openstack-operators/openstack-operator-controller-operator-59bf84b846-bghfn" Jan 05 21:47:19 crc kubenswrapper[5000]: I0105 21:47:19.474827 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sdjp\" (UniqueName: \"kubernetes.io/projected/e31709ea-50f3-4b79-9851-e6c21b82aa58-kube-api-access-5sdjp\") pod \"openstack-operator-controller-operator-59bf84b846-bghfn\" (UID: \"e31709ea-50f3-4b79-9851-e6c21b82aa58\") " pod="openstack-operators/openstack-operator-controller-operator-59bf84b846-bghfn" Jan 05 21:47:19 crc kubenswrapper[5000]: I0105 21:47:19.599753 5000 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-59bf84b846-bghfn" Jan 05 21:47:19 crc kubenswrapper[5000]: I0105 21:47:19.812500 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-59bf84b846-bghfn"] Jan 05 21:47:20 crc kubenswrapper[5000]: I0105 21:47:20.407036 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-59bf84b846-bghfn" event={"ID":"e31709ea-50f3-4b79-9851-e6c21b82aa58","Type":"ContainerStarted","Data":"d3728452b2407cb35c4c4fa179accf4b2f17a3585321bbd7e5798924befe9b8f"} Jan 05 21:47:23 crc kubenswrapper[5000]: I0105 21:47:23.099139 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:47:23 crc kubenswrapper[5000]: I0105 21:47:23.099489 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:47:23 crc kubenswrapper[5000]: I0105 21:47:23.099537 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:47:23 crc kubenswrapper[5000]: I0105 21:47:23.100044 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf4c8cd2c0e0c7d61f54579da2fd7b1a52efe0ef420b5d0f2c3068e03afe71bf"} 
pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:47:23 crc kubenswrapper[5000]: I0105 21:47:23.100192 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://cf4c8cd2c0e0c7d61f54579da2fd7b1a52efe0ef420b5d0f2c3068e03afe71bf" gracePeriod=600 Jan 05 21:47:23 crc kubenswrapper[5000]: I0105 21:47:23.437928 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="cf4c8cd2c0e0c7d61f54579da2fd7b1a52efe0ef420b5d0f2c3068e03afe71bf" exitCode=0 Jan 05 21:47:23 crc kubenswrapper[5000]: I0105 21:47:23.437982 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"cf4c8cd2c0e0c7d61f54579da2fd7b1a52efe0ef420b5d0f2c3068e03afe71bf"} Jan 05 21:47:23 crc kubenswrapper[5000]: I0105 21:47:23.438022 5000 scope.go:117] "RemoveContainer" containerID="5525c98bb5caf2b87bda34b84fcf1b0890fe58e7097f12bd761f68d5981ed84c" Jan 05 21:47:24 crc kubenswrapper[5000]: I0105 21:47:24.450166 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"fcda7dd4d8fd644f00dbabb101ded861726f4a6f3ef2d7cca2281e23671cc2ef"} Jan 05 21:47:24 crc kubenswrapper[5000]: I0105 21:47:24.452977 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-59bf84b846-bghfn" 
event={"ID":"e31709ea-50f3-4b79-9851-e6c21b82aa58","Type":"ContainerStarted","Data":"65c76043bc1e9db0cb7c0441dca1491fdc19e9781c5b92b49fa1eb351c4f41a0"} Jan 05 21:47:24 crc kubenswrapper[5000]: I0105 21:47:24.453194 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-59bf84b846-bghfn" Jan 05 21:47:24 crc kubenswrapper[5000]: I0105 21:47:24.502746 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-59bf84b846-bghfn" podStartSLOduration=1.730638141 podStartE2EDuration="5.502712299s" podCreationTimestamp="2026-01-05 21:47:19 +0000 UTC" firstStartedPulling="2026-01-05 21:47:19.823093891 +0000 UTC m=+794.779296370" lastFinishedPulling="2026-01-05 21:47:23.595168059 +0000 UTC m=+798.551370528" observedRunningTime="2026-01-05 21:47:24.498106647 +0000 UTC m=+799.454309126" watchObservedRunningTime="2026-01-05 21:47:24.502712299 +0000 UTC m=+799.458914768" Jan 05 21:47:29 crc kubenswrapper[5000]: I0105 21:47:29.603194 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-59bf84b846-bghfn" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.619707 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-mcqdp"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.621727 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mcqdp" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.623118 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-p6wws"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.623848 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-p6wws" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.625123 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-l76z6" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.625541 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xw9kq" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.634497 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-mcqdp"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.640610 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-p6wws"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.667029 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-jsbjc"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.668071 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jsbjc" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.671765 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ccjss" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.683355 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-jsbjc"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.698103 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-2rhpx"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.699145 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2rhpx" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.702653 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-2rhpx"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.703151 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-9w8xr" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.723778 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-2q8d7"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.724588 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-2q8d7" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.726708 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cq59\" (UniqueName: \"kubernetes.io/projected/97262ac6-99c3-47d4-a2a4-401e945a53c7-kube-api-access-7cq59\") pod \"cinder-operator-controller-manager-78979fc445-p6wws\" (UID: \"97262ac6-99c3-47d4-a2a4-401e945a53c7\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-p6wws" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.726785 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mhw\" (UniqueName: \"kubernetes.io/projected/2d94d179-bc23-416d-b4c7-6925b43d7131-kube-api-access-c6mhw\") pod \"designate-operator-controller-manager-66f8b87655-jsbjc\" (UID: \"2d94d179-bc23-416d-b4c7-6925b43d7131\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jsbjc" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.726829 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5grl8\" (UniqueName: \"kubernetes.io/projected/3b7bc759-79ec-4375-848d-a4900428e360-kube-api-access-5grl8\") pod \"barbican-operator-controller-manager-f6f74d6db-mcqdp\" (UID: \"3b7bc759-79ec-4375-848d-a4900428e360\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mcqdp" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.733514 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-flcpd" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.735905 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-rcwpw"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 
21:48:06.736909 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-rcwpw" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.740794 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qghjl" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.766999 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-2q8d7"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.781976 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.783289 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.789248 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7zm8t" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.789404 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.796994 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.807507 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-rcwpw"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.827843 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x95vd\" (UniqueName: 
\"kubernetes.io/projected/c246b6eb-3f29-404c-8b9c-f96bfc9ac87d-kube-api-access-x95vd\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-rcwpw\" (UID: \"c246b6eb-3f29-404c-8b9c-f96bfc9ac87d\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-rcwpw" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.828124 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jchtq\" (UniqueName: \"kubernetes.io/projected/a457b96c-32bc-4fbc-80e2-3567e1fdead4-kube-api-access-jchtq\") pod \"glance-operator-controller-manager-7b549fc966-2rhpx\" (UID: \"a457b96c-32bc-4fbc-80e2-3567e1fdead4\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2rhpx" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.828148 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cq59\" (UniqueName: \"kubernetes.io/projected/97262ac6-99c3-47d4-a2a4-401e945a53c7-kube-api-access-7cq59\") pod \"cinder-operator-controller-manager-78979fc445-p6wws\" (UID: \"97262ac6-99c3-47d4-a2a4-401e945a53c7\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-p6wws" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.828165 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6mhw\" (UniqueName: \"kubernetes.io/projected/2d94d179-bc23-416d-b4c7-6925b43d7131-kube-api-access-c6mhw\") pod \"designate-operator-controller-manager-66f8b87655-jsbjc\" (UID: \"2d94d179-bc23-416d-b4c7-6925b43d7131\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jsbjc" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.828196 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5grl8\" (UniqueName: \"kubernetes.io/projected/3b7bc759-79ec-4375-848d-a4900428e360-kube-api-access-5grl8\") pod 
\"barbican-operator-controller-manager-f6f74d6db-mcqdp\" (UID: \"3b7bc759-79ec-4375-848d-a4900428e360\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mcqdp" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.828249 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnjgd\" (UniqueName: \"kubernetes.io/projected/a5f4bfce-86d7-4e99-984f-2a834fda3018-kube-api-access-jnjgd\") pod \"heat-operator-controller-manager-658dd65b86-2q8d7\" (UID: \"a5f4bfce-86d7-4e99-984f-2a834fda3018\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-2q8d7" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.844960 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-m8qfg"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.845835 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-m8qfg" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.857795 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-h7j5w"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.858545 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-h7j5w" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.862368 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9bqjj" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.877352 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rh8rx" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.877504 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6mhw\" (UniqueName: \"kubernetes.io/projected/2d94d179-bc23-416d-b4c7-6925b43d7131-kube-api-access-c6mhw\") pod \"designate-operator-controller-manager-66f8b87655-jsbjc\" (UID: \"2d94d179-bc23-416d-b4c7-6925b43d7131\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jsbjc" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.877554 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-m8qfg"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.886462 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5grl8\" (UniqueName: \"kubernetes.io/projected/3b7bc759-79ec-4375-848d-a4900428e360-kube-api-access-5grl8\") pod \"barbican-operator-controller-manager-f6f74d6db-mcqdp\" (UID: \"3b7bc759-79ec-4375-848d-a4900428e360\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mcqdp" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.893549 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cq59\" (UniqueName: \"kubernetes.io/projected/97262ac6-99c3-47d4-a2a4-401e945a53c7-kube-api-access-7cq59\") pod \"cinder-operator-controller-manager-78979fc445-p6wws\" (UID: \"97262ac6-99c3-47d4-a2a4-401e945a53c7\") " 
pod="openstack-operators/cinder-operator-controller-manager-78979fc445-p6wws" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.903253 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-zg96g"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.904072 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-zg96g" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.914511 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-7ht4r" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.930990 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jchtq\" (UniqueName: \"kubernetes.io/projected/a457b96c-32bc-4fbc-80e2-3567e1fdead4-kube-api-access-jchtq\") pod \"glance-operator-controller-manager-7b549fc966-2rhpx\" (UID: \"a457b96c-32bc-4fbc-80e2-3567e1fdead4\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2rhpx" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.931052 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert\") pod \"infra-operator-controller-manager-6d99759cf-n9mxh\" (UID: \"87ca26ac-b882-4e9a-8f90-27461a61453e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.931088 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ss68\" (UniqueName: \"kubernetes.io/projected/87ca26ac-b882-4e9a-8f90-27461a61453e-kube-api-access-9ss68\") pod \"infra-operator-controller-manager-6d99759cf-n9mxh\" (UID: \"87ca26ac-b882-4e9a-8f90-27461a61453e\") " 
pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.931127 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67629\" (UniqueName: \"kubernetes.io/projected/7750c973-b8d1-47f3-90ed-1034a7e6c33c-kube-api-access-67629\") pod \"ironic-operator-controller-manager-f99f54bc8-m8qfg\" (UID: \"7750c973-b8d1-47f3-90ed-1034a7e6c33c\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-m8qfg" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.931153 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnjgd\" (UniqueName: \"kubernetes.io/projected/a5f4bfce-86d7-4e99-984f-2a834fda3018-kube-api-access-jnjgd\") pod \"heat-operator-controller-manager-658dd65b86-2q8d7\" (UID: \"a5f4bfce-86d7-4e99-984f-2a834fda3018\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-2q8d7" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.931188 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x95vd\" (UniqueName: \"kubernetes.io/projected/c246b6eb-3f29-404c-8b9c-f96bfc9ac87d-kube-api-access-x95vd\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-rcwpw\" (UID: \"c246b6eb-3f29-404c-8b9c-f96bfc9ac87d\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-rcwpw" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.931209 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5bq7\" (UniqueName: \"kubernetes.io/projected/fe4fd66d-9294-437e-b21e-c66cf323999e-kube-api-access-n5bq7\") pod \"keystone-operator-controller-manager-568985c78-h7j5w\" (UID: \"fe4fd66d-9294-437e-b21e-c66cf323999e\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-h7j5w" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 
21:48:06.937094 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9smz4"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.938102 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9smz4" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.946381 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-pblfb" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.950780 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-zg96g"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.951396 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mcqdp" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.954152 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jchtq\" (UniqueName: \"kubernetes.io/projected/a457b96c-32bc-4fbc-80e2-3567e1fdead4-kube-api-access-jchtq\") pod \"glance-operator-controller-manager-7b549fc966-2rhpx\" (UID: \"a457b96c-32bc-4fbc-80e2-3567e1fdead4\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2rhpx" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.961368 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x95vd\" (UniqueName: \"kubernetes.io/projected/c246b6eb-3f29-404c-8b9c-f96bfc9ac87d-kube-api-access-x95vd\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-rcwpw\" (UID: \"c246b6eb-3f29-404c-8b9c-f96bfc9ac87d\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-rcwpw" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.961663 5000 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-p6wws" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.967865 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-h7j5w"] Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.972084 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnjgd\" (UniqueName: \"kubernetes.io/projected/a5f4bfce-86d7-4e99-984f-2a834fda3018-kube-api-access-jnjgd\") pod \"heat-operator-controller-manager-658dd65b86-2q8d7\" (UID: \"a5f4bfce-86d7-4e99-984f-2a834fda3018\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-2q8d7" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.987362 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jsbjc" Jan 05 21:48:06 crc kubenswrapper[5000]: I0105 21:48:06.995335 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9smz4"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.015965 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-ghw2z"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.016834 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-ghw2z" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.017392 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2rhpx" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.023633 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2rlkt" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.031958 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ss68\" (UniqueName: \"kubernetes.io/projected/87ca26ac-b882-4e9a-8f90-27461a61453e-kube-api-access-9ss68\") pod \"infra-operator-controller-manager-6d99759cf-n9mxh\" (UID: \"87ca26ac-b882-4e9a-8f90-27461a61453e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.032000 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98cns\" (UniqueName: \"kubernetes.io/projected/450de243-6d71-4f61-836a-47028669d2b7-kube-api-access-98cns\") pod \"manila-operator-controller-manager-598945d5b8-zg96g\" (UID: \"450de243-6d71-4f61-836a-47028669d2b7\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-zg96g" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.032034 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67629\" (UniqueName: \"kubernetes.io/projected/7750c973-b8d1-47f3-90ed-1034a7e6c33c-kube-api-access-67629\") pod \"ironic-operator-controller-manager-f99f54bc8-m8qfg\" (UID: \"7750c973-b8d1-47f3-90ed-1034a7e6c33c\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-m8qfg" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.032086 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5bq7\" (UniqueName: \"kubernetes.io/projected/fe4fd66d-9294-437e-b21e-c66cf323999e-kube-api-access-n5bq7\") pod 
\"keystone-operator-controller-manager-568985c78-h7j5w\" (UID: \"fe4fd66d-9294-437e-b21e-c66cf323999e\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-h7j5w" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.032144 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mstvk\" (UniqueName: \"kubernetes.io/projected/bd739e2a-b4fb-43cb-bbc5-50b44e18bcfd-kube-api-access-mstvk\") pod \"mariadb-operator-controller-manager-7b88bfc995-9smz4\" (UID: \"bd739e2a-b4fb-43cb-bbc5-50b44e18bcfd\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9smz4" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.032175 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert\") pod \"infra-operator-controller-manager-6d99759cf-n9mxh\" (UID: \"87ca26ac-b882-4e9a-8f90-27461a61453e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:07 crc kubenswrapper[5000]: E0105 21:48:07.032300 5000 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 21:48:07 crc kubenswrapper[5000]: E0105 21:48:07.032353 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert podName:87ca26ac-b882-4e9a-8f90-27461a61453e nodeName:}" failed. No retries permitted until 2026-01-05 21:48:07.532336781 +0000 UTC m=+842.488539250 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert") pod "infra-operator-controller-manager-6d99759cf-n9mxh" (UID: "87ca26ac-b882-4e9a-8f90-27461a61453e") : secret "infra-operator-webhook-server-cert" not found Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.041989 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-ghw2z"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.059450 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-2q8d7" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.070649 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-rcwpw" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.077967 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-h5tz2"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.079069 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-h5tz2" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.093664 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67629\" (UniqueName: \"kubernetes.io/projected/7750c973-b8d1-47f3-90ed-1034a7e6c33c-kube-api-access-67629\") pod \"ironic-operator-controller-manager-f99f54bc8-m8qfg\" (UID: \"7750c973-b8d1-47f3-90ed-1034a7e6c33c\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-m8qfg" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.099821 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5bq7\" (UniqueName: \"kubernetes.io/projected/fe4fd66d-9294-437e-b21e-c66cf323999e-kube-api-access-n5bq7\") pod \"keystone-operator-controller-manager-568985c78-h7j5w\" (UID: \"fe4fd66d-9294-437e-b21e-c66cf323999e\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-h7j5w" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.100743 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hwcvt" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.126331 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ss68\" (UniqueName: \"kubernetes.io/projected/87ca26ac-b882-4e9a-8f90-27461a61453e-kube-api-access-9ss68\") pod \"infra-operator-controller-manager-6d99759cf-n9mxh\" (UID: \"87ca26ac-b882-4e9a-8f90-27461a61453e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.149673 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.161655 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-gp2tn\" (UniqueName: \"kubernetes.io/projected/bb2dd57d-6d64-4048-b69b-749250d948b9-kube-api-access-gp2tn\") pod \"octavia-operator-controller-manager-68c649d9d-h5tz2\" (UID: \"bb2dd57d-6d64-4048-b69b-749250d948b9\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-h5tz2" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.162082 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mstvk\" (UniqueName: \"kubernetes.io/projected/bd739e2a-b4fb-43cb-bbc5-50b44e18bcfd-kube-api-access-mstvk\") pod \"mariadb-operator-controller-manager-7b88bfc995-9smz4\" (UID: \"bd739e2a-b4fb-43cb-bbc5-50b44e18bcfd\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9smz4" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.162910 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98cns\" (UniqueName: \"kubernetes.io/projected/450de243-6d71-4f61-836a-47028669d2b7-kube-api-access-98cns\") pod \"manila-operator-controller-manager-598945d5b8-zg96g\" (UID: \"450de243-6d71-4f61-836a-47028669d2b7\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-zg96g" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.163067 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz62s\" (UniqueName: \"kubernetes.io/projected/d60727e4-58b9-43ed-ae99-0c44cab79dc9-kube-api-access-nz62s\") pod \"neutron-operator-controller-manager-7cd87b778f-ghw2z\" (UID: \"d60727e4-58b9-43ed-ae99-0c44cab79dc9\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-ghw2z" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.165759 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.191942 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-dqfrk" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.266081 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-m8qfg" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.266976 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-h5tz2"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.267002 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98cns\" (UniqueName: \"kubernetes.io/projected/450de243-6d71-4f61-836a-47028669d2b7-kube-api-access-98cns\") pod \"manila-operator-controller-manager-598945d5b8-zg96g\" (UID: \"450de243-6d71-4f61-836a-47028669d2b7\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-zg96g" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.270257 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mstvk\" (UniqueName: \"kubernetes.io/projected/bd739e2a-b4fb-43cb-bbc5-50b44e18bcfd-kube-api-access-mstvk\") pod \"mariadb-operator-controller-manager-7b88bfc995-9smz4\" (UID: \"bd739e2a-b4fb-43cb-bbc5-50b44e18bcfd\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9smz4" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.270459 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-h7j5w" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.271367 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp2tn\" (UniqueName: \"kubernetes.io/projected/bb2dd57d-6d64-4048-b69b-749250d948b9-kube-api-access-gp2tn\") pod \"octavia-operator-controller-manager-68c649d9d-h5tz2\" (UID: \"bb2dd57d-6d64-4048-b69b-749250d948b9\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-h5tz2" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.271438 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6nx\" (UniqueName: \"kubernetes.io/projected/e376cad9-0c9e-423a-a1fb-b33246417cbb-kube-api-access-wf6nx\") pod \"nova-operator-controller-manager-5fbbf8b6cc-xrl9g\" (UID: \"e376cad9-0c9e-423a-a1fb-b33246417cbb\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.271470 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz62s\" (UniqueName: \"kubernetes.io/projected/d60727e4-58b9-43ed-ae99-0c44cab79dc9-kube-api-access-nz62s\") pod \"neutron-operator-controller-manager-7cd87b778f-ghw2z\" (UID: \"d60727e4-58b9-43ed-ae99-0c44cab79dc9\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-ghw2z" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.309837 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz62s\" (UniqueName: \"kubernetes.io/projected/d60727e4-58b9-43ed-ae99-0c44cab79dc9-kube-api-access-nz62s\") pod \"neutron-operator-controller-manager-7cd87b778f-ghw2z\" (UID: \"d60727e4-58b9-43ed-ae99-0c44cab79dc9\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-ghw2z" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 
21:48:07.331204 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp2tn\" (UniqueName: \"kubernetes.io/projected/bb2dd57d-6d64-4048-b69b-749250d948b9-kube-api-access-gp2tn\") pod \"octavia-operator-controller-manager-68c649d9d-h5tz2\" (UID: \"bb2dd57d-6d64-4048-b69b-749250d948b9\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-h5tz2" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.343096 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-zg96g" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.373898 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf6nx\" (UniqueName: \"kubernetes.io/projected/e376cad9-0c9e-423a-a1fb-b33246417cbb-kube-api-access-wf6nx\") pod \"nova-operator-controller-manager-5fbbf8b6cc-xrl9g\" (UID: \"e376cad9-0c9e-423a-a1fb-b33246417cbb\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.376347 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.377245 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.377296 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.378997 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.379764 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.380345 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.380727 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.385317 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-g9qjq" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.385516 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7dvbk" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.385615 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.386192 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5rc6j" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.386319 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.393405 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.399040 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh"] Jan 05 21:48:07 crc 
kubenswrapper[5000]: I0105 21:48:07.399858 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.403393 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-v4rrb" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.405745 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.406478 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6nx\" (UniqueName: \"kubernetes.io/projected/e376cad9-0c9e-423a-a1fb-b33246417cbb-kube-api-access-wf6nx\") pod \"nova-operator-controller-manager-5fbbf8b6cc-xrl9g\" (UID: \"e376cad9-0c9e-423a-a1fb-b33246417cbb\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.423260 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-9cd8n"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.425057 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-9cd8n" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.428437 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-d9cqk" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.438940 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.444980 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-9cd8n"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.453493 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.454544 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9smz4" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.455573 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.463927 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-kz8df" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.476679 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.553455 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.554293 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.557558 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-bvfr4" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.566538 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-ghw2z" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.583304 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqbvf\" (UniqueName: \"kubernetes.io/projected/7dab6b1b-c641-4e22-a689-a1dc62da7733-kube-api-access-pqbvf\") pod \"placement-operator-controller-manager-9b6f8f78c-v6nfh\" (UID: \"7dab6b1b-c641-4e22-a689-a1dc62da7733\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.584491 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk2pr\" (UniqueName: \"kubernetes.io/projected/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-kube-api-access-rk2pr\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h\" (UID: \"f4d8f065-ce54-4bc9-9caf-e6a131e73a35\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.584568 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmxxm\" (UniqueName: \"kubernetes.io/projected/1236464f-4580-4f31-ab8b-a22d559aa8c3-kube-api-access-gmxxm\") pod \"telemetry-operator-controller-manager-68d988df55-9cd8n\" (UID: \"1236464f-4580-4f31-ab8b-a22d559aa8c3\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-9cd8n" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.584599 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h\" (UID: \"f4d8f065-ce54-4bc9-9caf-e6a131e73a35\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.584639 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cljm\" (UniqueName: \"kubernetes.io/projected/2a8023f1-b9cf-4fa2-b421-b053941d4c42-kube-api-access-9cljm\") pod \"swift-operator-controller-manager-bb586bbf4-pk7nh\" (UID: \"2a8023f1-b9cf-4fa2-b421-b053941d4c42\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.584671 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlngk\" (UniqueName: \"kubernetes.io/projected/42922f7b-4e7e-4ef1-b465-936097b98929-kube-api-access-wlngk\") pod \"ovn-operator-controller-manager-bf6d4f946-lh5t8\" (UID: \"42922f7b-4e7e-4ef1-b465-936097b98929\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.584722 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert\") pod \"infra-operator-controller-manager-6d99759cf-n9mxh\" (UID: \"87ca26ac-b882-4e9a-8f90-27461a61453e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.584760 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8w4k\" (UniqueName: \"kubernetes.io/projected/95d67b6f-d50a-49c6-b866-9926f4b9e495-kube-api-access-c8w4k\") pod \"test-operator-controller-manager-6c866cfdcb-dzjnd\" (UID: \"95d67b6f-d50a-49c6-b866-9926f4b9e495\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd" Jan 05 21:48:07 crc kubenswrapper[5000]: E0105 21:48:07.585052 5000 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 21:48:07 crc kubenswrapper[5000]: E0105 21:48:07.585129 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert podName:87ca26ac-b882-4e9a-8f90-27461a61453e nodeName:}" failed. No retries permitted until 2026-01-05 21:48:08.585099219 +0000 UTC m=+843.541301688 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert") pod "infra-operator-controller-manager-6d99759cf-n9mxh" (UID: "87ca26ac-b882-4e9a-8f90-27461a61453e") : secret "infra-operator-webhook-server-cert" not found Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.593284 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-h5tz2" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.600073 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.630551 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.630752 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.631776 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.636332 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.636474 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.636682 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-vxq56" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.637622 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.688105 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqbvf\" (UniqueName: \"kubernetes.io/projected/7dab6b1b-c641-4e22-a689-a1dc62da7733-kube-api-access-pqbvf\") pod \"placement-operator-controller-manager-9b6f8f78c-v6nfh\" (UID: \"7dab6b1b-c641-4e22-a689-a1dc62da7733\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.689520 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk2pr\" (UniqueName: \"kubernetes.io/projected/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-kube-api-access-rk2pr\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h\" (UID: \"f4d8f065-ce54-4bc9-9caf-e6a131e73a35\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.689590 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.689643 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmxxm\" (UniqueName: \"kubernetes.io/projected/1236464f-4580-4f31-ab8b-a22d559aa8c3-kube-api-access-gmxxm\") pod \"telemetry-operator-controller-manager-68d988df55-9cd8n\" (UID: \"1236464f-4580-4f31-ab8b-a22d559aa8c3\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-9cd8n" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.689678 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h\" (UID: \"f4d8f065-ce54-4bc9-9caf-e6a131e73a35\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.689705 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.689733 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cljm\" (UniqueName: \"kubernetes.io/projected/2a8023f1-b9cf-4fa2-b421-b053941d4c42-kube-api-access-9cljm\") pod \"swift-operator-controller-manager-bb586bbf4-pk7nh\" (UID: 
\"2a8023f1-b9cf-4fa2-b421-b053941d4c42\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.689755 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlngk\" (UniqueName: \"kubernetes.io/projected/42922f7b-4e7e-4ef1-b465-936097b98929-kube-api-access-wlngk\") pod \"ovn-operator-controller-manager-bf6d4f946-lh5t8\" (UID: \"42922f7b-4e7e-4ef1-b465-936097b98929\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.689826 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8w4k\" (UniqueName: \"kubernetes.io/projected/95d67b6f-d50a-49c6-b866-9926f4b9e495-kube-api-access-c8w4k\") pod \"test-operator-controller-manager-6c866cfdcb-dzjnd\" (UID: \"95d67b6f-d50a-49c6-b866-9926f4b9e495\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.689865 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwcgd\" (UniqueName: \"kubernetes.io/projected/fb31c907-60af-4a8c-a49f-977f28a18e20-kube-api-access-hwcgd\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.689908 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr949\" (UniqueName: \"kubernetes.io/projected/5830ae86-6c11-4567-8f4a-28d4e3251c07-kube-api-access-mr949\") pod \"watcher-operator-controller-manager-9dbdf6486-whzx7\" (UID: \"5830ae86-6c11-4567-8f4a-28d4e3251c07\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7" Jan 05 
21:48:07 crc kubenswrapper[5000]: E0105 21:48:07.690273 5000 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 21:48:07 crc kubenswrapper[5000]: E0105 21:48:07.690314 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert podName:f4d8f065-ce54-4bc9-9caf-e6a131e73a35 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:08.190302198 +0000 UTC m=+843.146504667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" (UID: "f4d8f065-ce54-4bc9-9caf-e6a131e73a35") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.706406 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fv4wf"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.707434 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fv4wf" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.713797 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlngk\" (UniqueName: \"kubernetes.io/projected/42922f7b-4e7e-4ef1-b465-936097b98929-kube-api-access-wlngk\") pod \"ovn-operator-controller-manager-bf6d4f946-lh5t8\" (UID: \"42922f7b-4e7e-4ef1-b465-936097b98929\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.714444 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7lb8q" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.715114 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cljm\" (UniqueName: \"kubernetes.io/projected/2a8023f1-b9cf-4fa2-b421-b053941d4c42-kube-api-access-9cljm\") pod \"swift-operator-controller-manager-bb586bbf4-pk7nh\" (UID: \"2a8023f1-b9cf-4fa2-b421-b053941d4c42\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.715132 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqbvf\" (UniqueName: \"kubernetes.io/projected/7dab6b1b-c641-4e22-a689-a1dc62da7733-kube-api-access-pqbvf\") pod \"placement-operator-controller-manager-9b6f8f78c-v6nfh\" (UID: \"7dab6b1b-c641-4e22-a689-a1dc62da7733\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.716051 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmxxm\" (UniqueName: \"kubernetes.io/projected/1236464f-4580-4f31-ab8b-a22d559aa8c3-kube-api-access-gmxxm\") pod \"telemetry-operator-controller-manager-68d988df55-9cd8n\" (UID: 
\"1236464f-4580-4f31-ab8b-a22d559aa8c3\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-9cd8n" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.719267 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fv4wf"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.725099 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.726030 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk2pr\" (UniqueName: \"kubernetes.io/projected/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-kube-api-access-rk2pr\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h\" (UID: \"f4d8f065-ce54-4bc9-9caf-e6a131e73a35\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.727328 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8w4k\" (UniqueName: \"kubernetes.io/projected/95d67b6f-d50a-49c6-b866-9926f4b9e495-kube-api-access-c8w4k\") pod \"test-operator-controller-manager-6c866cfdcb-dzjnd\" (UID: \"95d67b6f-d50a-49c6-b866-9926f4b9e495\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.779746 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.791643 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.791737 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwcgd\" (UniqueName: \"kubernetes.io/projected/fb31c907-60af-4a8c-a49f-977f28a18e20-kube-api-access-hwcgd\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.791772 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr949\" (UniqueName: \"kubernetes.io/projected/5830ae86-6c11-4567-8f4a-28d4e3251c07-kube-api-access-mr949\") pod \"watcher-operator-controller-manager-9dbdf6486-whzx7\" (UID: \"5830ae86-6c11-4567-8f4a-28d4e3251c07\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.791855 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:07 crc kubenswrapper[5000]: E0105 21:48:07.791852 5000 
secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 21:48:07 crc kubenswrapper[5000]: E0105 21:48:07.792227 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs podName:fb31c907-60af-4a8c-a49f-977f28a18e20 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:08.292207493 +0000 UTC m=+843.248409962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs") pod "openstack-operator-controller-manager-5cd5f6db77-hgptq" (UID: "fb31c907-60af-4a8c-a49f-977f28a18e20") : secret "webhook-server-cert" not found Jan 05 21:48:07 crc kubenswrapper[5000]: E0105 21:48:07.792655 5000 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 21:48:07 crc kubenswrapper[5000]: E0105 21:48:07.792692 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs podName:fb31c907-60af-4a8c-a49f-977f28a18e20 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:08.292680347 +0000 UTC m=+843.248882816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs") pod "openstack-operator-controller-manager-5cd5f6db77-hgptq" (UID: "fb31c907-60af-4a8c-a49f-977f28a18e20") : secret "metrics-server-cert" not found Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.807431 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.815875 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwcgd\" (UniqueName: \"kubernetes.io/projected/fb31c907-60af-4a8c-a49f-977f28a18e20-kube-api-access-hwcgd\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.820053 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr949\" (UniqueName: \"kubernetes.io/projected/5830ae86-6c11-4567-8f4a-28d4e3251c07-kube-api-access-mr949\") pod \"watcher-operator-controller-manager-9dbdf6486-whzx7\" (UID: \"5830ae86-6c11-4567-8f4a-28d4e3251c07\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.820702 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-9cd8n" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.826704 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-mcqdp"] Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.846844 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.886042 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.893819 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-867rf\" (UniqueName: \"kubernetes.io/projected/6e1e7b73-65c0-40db-964f-93e2d81d1004-kube-api-access-867rf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fv4wf\" (UID: \"6e1e7b73-65c0-40db-964f-93e2d81d1004\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fv4wf" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.994641 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-867rf\" (UniqueName: \"kubernetes.io/projected/6e1e7b73-65c0-40db-964f-93e2d81d1004-kube-api-access-867rf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fv4wf\" (UID: \"6e1e7b73-65c0-40db-964f-93e2d81d1004\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fv4wf" Jan 05 21:48:07 crc kubenswrapper[5000]: I0105 21:48:07.999042 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-jsbjc"] Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.005972 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-rcwpw"] Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.011130 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-2q8d7"] Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.016706 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-867rf\" (UniqueName: \"kubernetes.io/projected/6e1e7b73-65c0-40db-964f-93e2d81d1004-kube-api-access-867rf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fv4wf\" (UID: \"6e1e7b73-65c0-40db-964f-93e2d81d1004\") 
" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fv4wf" Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.017230 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-p6wws"] Jan 05 21:48:08 crc kubenswrapper[5000]: W0105 21:48:08.018564 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5f4bfce_86d7_4e99_984f_2a834fda3018.slice/crio-d768d70047bb110516ac84094a8c6cc635e5e205ad3af9a82cbdf5c2ce102dab WatchSource:0}: Error finding container d768d70047bb110516ac84094a8c6cc635e5e205ad3af9a82cbdf5c2ce102dab: Status 404 returned error can't find the container with id d768d70047bb110516ac84094a8c6cc635e5e205ad3af9a82cbdf5c2ce102dab Jan 05 21:48:08 crc kubenswrapper[5000]: W0105 21:48:08.020258 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97262ac6_99c3_47d4_a2a4_401e945a53c7.slice/crio-78ee41c926454caa9d0a67da711c2a023de6383a9cac8cacb0ebc3a6954dc0d9 WatchSource:0}: Error finding container 78ee41c926454caa9d0a67da711c2a023de6383a9cac8cacb0ebc3a6954dc0d9: Status 404 returned error can't find the container with id 78ee41c926454caa9d0a67da711c2a023de6383a9cac8cacb0ebc3a6954dc0d9 Jan 05 21:48:08 crc kubenswrapper[5000]: W0105 21:48:08.021050 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc246b6eb_3f29_404c_8b9c_f96bfc9ac87d.slice/crio-cf96f313b08f7818ad8472743b58fa017d8650849e79268faa18215b8c2411b7 WatchSource:0}: Error finding container cf96f313b08f7818ad8472743b58fa017d8650849e79268faa18215b8c2411b7: Status 404 returned error can't find the container with id cf96f313b08f7818ad8472743b58fa017d8650849e79268faa18215b8c2411b7 Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.033303 5000 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fv4wf" Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.159955 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-m8qfg"] Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.165476 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-h7j5w"] Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.173983 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-zg96g"] Jan 05 21:48:08 crc kubenswrapper[5000]: W0105 21:48:08.176354 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe4fd66d_9294_437e_b21e_c66cf323999e.slice/crio-0e991a3267853161449eee4108fca05dd91eec8830b920f8e443cc0bcb07af1c WatchSource:0}: Error finding container 0e991a3267853161449eee4108fca05dd91eec8830b920f8e443cc0bcb07af1c: Status 404 returned error can't find the container with id 0e991a3267853161449eee4108fca05dd91eec8830b920f8e443cc0bcb07af1c Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.178664 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-2rhpx"] Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.197219 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h\" (UID: \"f4d8f065-ce54-4bc9-9caf-e6a131e73a35\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.197360 5000 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.197407 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert podName:f4d8f065-ce54-4bc9-9caf-e6a131e73a35 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:09.197394017 +0000 UTC m=+844.153596476 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" (UID: "f4d8f065-ce54-4bc9-9caf-e6a131e73a35") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.298988 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.299083 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.299116 5000 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.299184 5000 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs podName:fb31c907-60af-4a8c-a49f-977f28a18e20 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:09.299165828 +0000 UTC m=+844.255368297 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs") pod "openstack-operator-controller-manager-5cd5f6db77-hgptq" (UID: "fb31c907-60af-4a8c-a49f-977f28a18e20") : secret "metrics-server-cert" not found Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.299249 5000 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.299332 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs podName:fb31c907-60af-4a8c-a49f-977f28a18e20 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:09.299314073 +0000 UTC m=+844.255516542 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs") pod "openstack-operator-controller-manager-5cd5f6db77-hgptq" (UID: "fb31c907-60af-4a8c-a49f-977f28a18e20") : secret "webhook-server-cert" not found Jan 05 21:48:08 crc kubenswrapper[5000]: W0105 21:48:08.360375 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd60727e4_58b9_43ed_ae99_0c44cab79dc9.slice/crio-89634aebeb57a334e4fba82b28f13e4e87aad954db9cef98a3995c7766add0e5 WatchSource:0}: Error finding container 89634aebeb57a334e4fba82b28f13e4e87aad954db9cef98a3995c7766add0e5: Status 404 returned error can't find the container with id 89634aebeb57a334e4fba82b28f13e4e87aad954db9cef98a3995c7766add0e5 Jan 05 21:48:08 crc kubenswrapper[5000]: W0105 21:48:08.365678 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd739e2a_b4fb_43cb_bbc5_50b44e18bcfd.slice/crio-4292a8b5162e0481b5a39b6390ba87244e44b4424e10d44af9f8634b9290b2b0 WatchSource:0}: Error finding container 4292a8b5162e0481b5a39b6390ba87244e44b4424e10d44af9f8634b9290b2b0: Status 404 returned error can't find the container with id 4292a8b5162e0481b5a39b6390ba87244e44b4424e10d44af9f8634b9290b2b0 Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.367383 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9smz4"] Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.371405 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-ghw2z"] Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.376148 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-h5tz2"] Jan 05 21:48:08 crc 
kubenswrapper[5000]: W0105 21:48:08.377574 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode376cad9_0c9e_423a_a1fb_b33246417cbb.slice/crio-3cce7d93410fb5d7a66ede3de77c6a156c0e74d4b0b23bc3a46eed00184fd1d0 WatchSource:0}: Error finding container 3cce7d93410fb5d7a66ede3de77c6a156c0e74d4b0b23bc3a46eed00184fd1d0: Status 404 returned error can't find the container with id 3cce7d93410fb5d7a66ede3de77c6a156c0e74d4b0b23bc3a46eed00184fd1d0 Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.381753 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g"] Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.381830 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wf6nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5fbbf8b6cc-xrl9g_openstack-operators(e376cad9-0c9e-423a-a1fb-b33246417cbb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.383199 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g" podUID="e376cad9-0c9e-423a-a1fb-b33246417cbb" Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.496899 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-9cd8n"] Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.508926 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh"] Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.510814 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh"] Jan 05 21:48:08 crc kubenswrapper[5000]: W0105 21:48:08.511245 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1236464f_4580_4f31_ab8b_a22d559aa8c3.slice/crio-5278be795e511c0e95162ab9d5676254f27383952ff50b2bac2b5891c96fd9ab WatchSource:0}: Error finding container 5278be795e511c0e95162ab9d5676254f27383952ff50b2bac2b5891c96fd9ab: Status 404 returned error can't find the container with id 5278be795e511c0e95162ab9d5676254f27383952ff50b2bac2b5891c96fd9ab Jan 05 21:48:08 crc kubenswrapper[5000]: W0105 21:48:08.515184 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a8023f1_b9cf_4fa2_b421_b053941d4c42.slice/crio-7fe57e956986d1eba13c324d8027fc60a415abb02c8a2379ceed32747f9161e2 WatchSource:0}: Error finding container 7fe57e956986d1eba13c324d8027fc60a415abb02c8a2379ceed32747f9161e2: Status 404 returned error can't find the container with id 7fe57e956986d1eba13c324d8027fc60a415abb02c8a2379ceed32747f9161e2 Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.515626 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd"] Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.519391 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cljm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bb586bbf4-pk7nh_openstack-operators(2a8023f1-b9cf-4fa2-b421-b053941d4c42): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.520605 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh" podUID="2a8023f1-b9cf-4fa2-b421-b053941d4c42" Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.520902 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8"] Jan 05 21:48:08 crc kubenswrapper[5000]: W0105 21:48:08.527419 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95d67b6f_d50a_49c6_b866_9926f4b9e495.slice/crio-f403e71c74da0eaab3ac16409f14e22af93be771e45c99c01244bad538500e58 WatchSource:0}: Error finding container f403e71c74da0eaab3ac16409f14e22af93be771e45c99c01244bad538500e58: Status 404 returned error can't find the container with id 
f403e71c74da0eaab3ac16409f14e22af93be771e45c99c01244bad538500e58 Jan 05 21:48:08 crc kubenswrapper[5000]: W0105 21:48:08.531167 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42922f7b_4e7e_4ef1_b465_936097b98929.slice/crio-f658ed1b2181e592d00cec8d7c710229285e0ab4ea04f903cb3b3032f574a49c WatchSource:0}: Error finding container f658ed1b2181e592d00cec8d7c710229285e0ab4ea04f903cb3b3032f574a49c: Status 404 returned error can't find the container with id f658ed1b2181e592d00cec8d7c710229285e0ab4ea04f903cb3b3032f574a49c Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.531419 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c8w4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-6c866cfdcb-dzjnd_openstack-operators(95d67b6f-d50a-49c6-b866-9926f4b9e495): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.532803 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd" podUID="95d67b6f-d50a-49c6-b866-9926f4b9e495" Jan 05 21:48:08 crc kubenswrapper[5000]: W0105 21:48:08.534264 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dab6b1b_c641_4e22_a689_a1dc62da7733.slice/crio-89fb6614684d16ab215c7f5c0d12328cd6f79f0c099d53f9470c58042a82a9c5 WatchSource:0}: Error finding container 
89fb6614684d16ab215c7f5c0d12328cd6f79f0c099d53f9470c58042a82a9c5: Status 404 returned error can't find the container with id 89fb6614684d16ab215c7f5c0d12328cd6f79f0c099d53f9470c58042a82a9c5 Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.536280 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wlngk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bf6d4f946-lh5t8_openstack-operators(42922f7b-4e7e-4ef1-b465-936097b98929): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.536977 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pqbvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-9b6f8f78c-v6nfh_openstack-operators(7dab6b1b-c641-4e22-a689-a1dc62da7733): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.537361 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8" podUID="42922f7b-4e7e-4ef1-b465-936097b98929" Jan 05 21:48:08 crc 
kubenswrapper[5000]: E0105 21:48:08.539655 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh" podUID="7dab6b1b-c641-4e22-a689-a1dc62da7733" Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.603928 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert\") pod \"infra-operator-controller-manager-6d99759cf-n9mxh\" (UID: \"87ca26ac-b882-4e9a-8f90-27461a61453e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.604334 5000 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.604419 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert podName:87ca26ac-b882-4e9a-8f90-27461a61453e nodeName:}" failed. No retries permitted until 2026-01-05 21:48:10.604400112 +0000 UTC m=+845.560602571 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert") pod "infra-operator-controller-manager-6d99759cf-n9mxh" (UID: "87ca26ac-b882-4e9a-8f90-27461a61453e") : secret "infra-operator-webhook-server-cert" not found Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.622614 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7"] Jan 05 21:48:08 crc kubenswrapper[5000]: W0105 21:48:08.626139 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5830ae86_6c11_4567_8f4a_28d4e3251c07.slice/crio-27993b28c53002f24c1152c8d07110363e13d574406b0251147041ee77e3284f WatchSource:0}: Error finding container 27993b28c53002f24c1152c8d07110363e13d574406b0251147041ee77e3284f: Status 404 returned error can't find the container with id 27993b28c53002f24c1152c8d07110363e13d574406b0251147041ee77e3284f Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.628797 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mr949,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-9dbdf6486-whzx7_openstack-operators(5830ae86-6c11-4567-8f4a-28d4e3251c07): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.629970 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7" podUID="5830ae86-6c11-4567-8f4a-28d4e3251c07" Jan 05 21:48:08 crc 
kubenswrapper[5000]: I0105 21:48:08.645316 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fv4wf"] Jan 05 21:48:08 crc kubenswrapper[5000]: W0105 21:48:08.654274 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e1e7b73_65c0_40db_964f_93e2d81d1004.slice/crio-bf55c4686b8f2d2ded00d71ab111014bdb283189851dcdd98572b7503f83f75f WatchSource:0}: Error finding container bf55c4686b8f2d2ded00d71ab111014bdb283189851dcdd98572b7503f83f75f: Status 404 returned error can't find the container with id bf55c4686b8f2d2ded00d71ab111014bdb283189851dcdd98572b7503f83f75f Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.786839 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fv4wf" event={"ID":"6e1e7b73-65c0-40db-964f-93e2d81d1004","Type":"ContainerStarted","Data":"bf55c4686b8f2d2ded00d71ab111014bdb283189851dcdd98572b7503f83f75f"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.790691 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g" event={"ID":"e376cad9-0c9e-423a-a1fb-b33246417cbb","Type":"ContainerStarted","Data":"3cce7d93410fb5d7a66ede3de77c6a156c0e74d4b0b23bc3a46eed00184fd1d0"} Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.792554 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g" podUID="e376cad9-0c9e-423a-a1fb-b33246417cbb" Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.794068 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-9cd8n" event={"ID":"1236464f-4580-4f31-ab8b-a22d559aa8c3","Type":"ContainerStarted","Data":"5278be795e511c0e95162ab9d5676254f27383952ff50b2bac2b5891c96fd9ab"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.795783 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-m8qfg" event={"ID":"7750c973-b8d1-47f3-90ed-1034a7e6c33c","Type":"ContainerStarted","Data":"0a24eb4eb2fb2e565d8df9fbcb979b45937c0c4f1ca5c5cfd1c0508bbac317c8"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.801034 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-rcwpw" event={"ID":"c246b6eb-3f29-404c-8b9c-f96bfc9ac87d","Type":"ContainerStarted","Data":"cf96f313b08f7818ad8472743b58fa017d8650849e79268faa18215b8c2411b7"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.806988 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh" event={"ID":"2a8023f1-b9cf-4fa2-b421-b053941d4c42","Type":"ContainerStarted","Data":"7fe57e956986d1eba13c324d8027fc60a415abb02c8a2379ceed32747f9161e2"} Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.808984 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh" podUID="2a8023f1-b9cf-4fa2-b421-b053941d4c42" Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.809617 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-2q8d7" 
event={"ID":"a5f4bfce-86d7-4e99-984f-2a834fda3018","Type":"ContainerStarted","Data":"d768d70047bb110516ac84094a8c6cc635e5e205ad3af9a82cbdf5c2ce102dab"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.813225 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-h7j5w" event={"ID":"fe4fd66d-9294-437e-b21e-c66cf323999e","Type":"ContainerStarted","Data":"0e991a3267853161449eee4108fca05dd91eec8830b920f8e443cc0bcb07af1c"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.814265 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jsbjc" event={"ID":"2d94d179-bc23-416d-b4c7-6925b43d7131","Type":"ContainerStarted","Data":"b419dc5992718ef4cb3b9734d061c36831c4e1702e7de14e0f721ad53387b7f2"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.815341 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mcqdp" event={"ID":"3b7bc759-79ec-4375-848d-a4900428e360","Type":"ContainerStarted","Data":"8bc507ed8e6a16d7809cb534bd59bcac407f740a8be46df4149807b97b02825b"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.816682 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-zg96g" event={"ID":"450de243-6d71-4f61-836a-47028669d2b7","Type":"ContainerStarted","Data":"968581157b00d324c80fec42298edb22612841da091614a77eb15dc2bf27ccbd"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.821246 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8" event={"ID":"42922f7b-4e7e-4ef1-b465-936097b98929","Type":"ContainerStarted","Data":"f658ed1b2181e592d00cec8d7c710229285e0ab4ea04f903cb3b3032f574a49c"} Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.822750 5000 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8" podUID="42922f7b-4e7e-4ef1-b465-936097b98929" Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.822956 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7" event={"ID":"5830ae86-6c11-4567-8f4a-28d4e3251c07","Type":"ContainerStarted","Data":"27993b28c53002f24c1152c8d07110363e13d574406b0251147041ee77e3284f"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.825932 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9smz4" event={"ID":"bd739e2a-b4fb-43cb-bbc5-50b44e18bcfd","Type":"ContainerStarted","Data":"4292a8b5162e0481b5a39b6390ba87244e44b4424e10d44af9f8634b9290b2b0"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.833468 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-ghw2z" event={"ID":"d60727e4-58b9-43ed-ae99-0c44cab79dc9","Type":"ContainerStarted","Data":"89634aebeb57a334e4fba82b28f13e4e87aad954db9cef98a3995c7766add0e5"} Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.836685 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7" podUID="5830ae86-6c11-4567-8f4a-28d4e3251c07" Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.837561 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2rhpx" event={"ID":"a457b96c-32bc-4fbc-80e2-3567e1fdead4","Type":"ContainerStarted","Data":"c3278d6773fef07d721684ad384a7eec86916b1df850117d307bc4cdfd8460f5"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.841372 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh" event={"ID":"7dab6b1b-c641-4e22-a689-a1dc62da7733","Type":"ContainerStarted","Data":"89fb6614684d16ab215c7f5c0d12328cd6f79f0c099d53f9470c58042a82a9c5"} Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.843473 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh" podUID="7dab6b1b-c641-4e22-a689-a1dc62da7733" Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.845867 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-h5tz2" event={"ID":"bb2dd57d-6d64-4048-b69b-749250d948b9","Type":"ContainerStarted","Data":"cdf679247d8e29352f5d0fc2a0449550edd6279fb728ae5bf903dc5b5b2ad0b6"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.846813 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-p6wws" event={"ID":"97262ac6-99c3-47d4-a2a4-401e945a53c7","Type":"ContainerStarted","Data":"78ee41c926454caa9d0a67da711c2a023de6383a9cac8cacb0ebc3a6954dc0d9"} Jan 05 21:48:08 crc kubenswrapper[5000]: I0105 21:48:08.847718 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd" 
event={"ID":"95d67b6f-d50a-49c6-b866-9926f4b9e495","Type":"ContainerStarted","Data":"f403e71c74da0eaab3ac16409f14e22af93be771e45c99c01244bad538500e58"} Jan 05 21:48:08 crc kubenswrapper[5000]: E0105 21:48:08.849791 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd" podUID="95d67b6f-d50a-49c6-b866-9926f4b9e495" Jan 05 21:48:09 crc kubenswrapper[5000]: I0105 21:48:09.212136 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h\" (UID: \"f4d8f065-ce54-4bc9-9caf-e6a131e73a35\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:09 crc kubenswrapper[5000]: E0105 21:48:09.212304 5000 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 21:48:09 crc kubenswrapper[5000]: E0105 21:48:09.212537 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert podName:f4d8f065-ce54-4bc9-9caf-e6a131e73a35 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:11.212519511 +0000 UTC m=+846.168721980 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" (UID: "f4d8f065-ce54-4bc9-9caf-e6a131e73a35") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 21:48:09 crc kubenswrapper[5000]: I0105 21:48:09.314058 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:09 crc kubenswrapper[5000]: I0105 21:48:09.314144 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:09 crc kubenswrapper[5000]: E0105 21:48:09.314435 5000 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 21:48:09 crc kubenswrapper[5000]: E0105 21:48:09.314490 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs podName:fb31c907-60af-4a8c-a49f-977f28a18e20 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:11.314472388 +0000 UTC m=+846.270674857 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs") pod "openstack-operator-controller-manager-5cd5f6db77-hgptq" (UID: "fb31c907-60af-4a8c-a49f-977f28a18e20") : secret "webhook-server-cert" not found Jan 05 21:48:09 crc kubenswrapper[5000]: E0105 21:48:09.314713 5000 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 21:48:09 crc kubenswrapper[5000]: E0105 21:48:09.314797 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs podName:fb31c907-60af-4a8c-a49f-977f28a18e20 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:11.314779887 +0000 UTC m=+846.270982356 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs") pod "openstack-operator-controller-manager-5cd5f6db77-hgptq" (UID: "fb31c907-60af-4a8c-a49f-977f28a18e20") : secret "metrics-server-cert" not found Jan 05 21:48:09 crc kubenswrapper[5000]: E0105 21:48:09.864454 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:1b684c4ca525a279deee45980140d895e264526c5c7e0a6981d6fae6cbcaa420\\\"\"" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh" podUID="7dab6b1b-c641-4e22-a689-a1dc62da7733" Jan 05 21:48:09 crc kubenswrapper[5000]: E0105 21:48:09.864515 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" 
pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8" podUID="42922f7b-4e7e-4ef1-b465-936097b98929" Jan 05 21:48:09 crc kubenswrapper[5000]: E0105 21:48:09.864612 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7" podUID="5830ae86-6c11-4567-8f4a-28d4e3251c07" Jan 05 21:48:09 crc kubenswrapper[5000]: E0105 21:48:09.864664 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh" podUID="2a8023f1-b9cf-4fa2-b421-b053941d4c42" Jan 05 21:48:09 crc kubenswrapper[5000]: E0105 21:48:09.864734 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g" podUID="e376cad9-0c9e-423a-a1fb-b33246417cbb" Jan 05 21:48:09 crc kubenswrapper[5000]: E0105 21:48:09.887056 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd" podUID="95d67b6f-d50a-49c6-b866-9926f4b9e495" 
Jan 05 21:48:10 crc kubenswrapper[5000]: I0105 21:48:10.634466 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert\") pod \"infra-operator-controller-manager-6d99759cf-n9mxh\" (UID: \"87ca26ac-b882-4e9a-8f90-27461a61453e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:10 crc kubenswrapper[5000]: E0105 21:48:10.634748 5000 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 21:48:10 crc kubenswrapper[5000]: E0105 21:48:10.635035 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert podName:87ca26ac-b882-4e9a-8f90-27461a61453e nodeName:}" failed. No retries permitted until 2026-01-05 21:48:14.63500823 +0000 UTC m=+849.591210699 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert") pod "infra-operator-controller-manager-6d99759cf-n9mxh" (UID: "87ca26ac-b882-4e9a-8f90-27461a61453e") : secret "infra-operator-webhook-server-cert" not found Jan 05 21:48:11 crc kubenswrapper[5000]: I0105 21:48:11.242633 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h\" (UID: \"f4d8f065-ce54-4bc9-9caf-e6a131e73a35\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:11 crc kubenswrapper[5000]: E0105 21:48:11.242885 5000 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 21:48:11 crc 
kubenswrapper[5000]: E0105 21:48:11.243037 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert podName:f4d8f065-ce54-4bc9-9caf-e6a131e73a35 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:15.242996965 +0000 UTC m=+850.199199474 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" (UID: "f4d8f065-ce54-4bc9-9caf-e6a131e73a35") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 21:48:11 crc kubenswrapper[5000]: I0105 21:48:11.344190 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:11 crc kubenswrapper[5000]: I0105 21:48:11.344304 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:11 crc kubenswrapper[5000]: E0105 21:48:11.344385 5000 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 21:48:11 crc kubenswrapper[5000]: E0105 21:48:11.344395 5000 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 21:48:11 crc kubenswrapper[5000]: E0105 21:48:11.344454 5000 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs podName:fb31c907-60af-4a8c-a49f-977f28a18e20 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:15.344434377 +0000 UTC m=+850.300636846 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs") pod "openstack-operator-controller-manager-5cd5f6db77-hgptq" (UID: "fb31c907-60af-4a8c-a49f-977f28a18e20") : secret "metrics-server-cert" not found Jan 05 21:48:11 crc kubenswrapper[5000]: E0105 21:48:11.344467 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs podName:fb31c907-60af-4a8c-a49f-977f28a18e20 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:15.344462198 +0000 UTC m=+850.300664667 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs") pod "openstack-operator-controller-manager-5cd5f6db77-hgptq" (UID: "fb31c907-60af-4a8c-a49f-977f28a18e20") : secret "webhook-server-cert" not found Jan 05 21:48:14 crc kubenswrapper[5000]: I0105 21:48:14.689361 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert\") pod \"infra-operator-controller-manager-6d99759cf-n9mxh\" (UID: \"87ca26ac-b882-4e9a-8f90-27461a61453e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:14 crc kubenswrapper[5000]: E0105 21:48:14.689591 5000 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 05 21:48:14 crc kubenswrapper[5000]: E0105 21:48:14.689832 5000 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert podName:87ca26ac-b882-4e9a-8f90-27461a61453e nodeName:}" failed. No retries permitted until 2026-01-05 21:48:22.689810373 +0000 UTC m=+857.646012852 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert") pod "infra-operator-controller-manager-6d99759cf-n9mxh" (UID: "87ca26ac-b882-4e9a-8f90-27461a61453e") : secret "infra-operator-webhook-server-cert" not found Jan 05 21:48:15 crc kubenswrapper[5000]: I0105 21:48:15.297366 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h\" (UID: \"f4d8f065-ce54-4bc9-9caf-e6a131e73a35\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:15 crc kubenswrapper[5000]: E0105 21:48:15.297543 5000 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 21:48:15 crc kubenswrapper[5000]: E0105 21:48:15.297616 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert podName:f4d8f065-ce54-4bc9-9caf-e6a131e73a35 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:23.297598383 +0000 UTC m=+858.253800852 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" (UID: "f4d8f065-ce54-4bc9-9caf-e6a131e73a35") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 21:48:15 crc kubenswrapper[5000]: I0105 21:48:15.399345 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:15 crc kubenswrapper[5000]: I0105 21:48:15.399433 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:15 crc kubenswrapper[5000]: E0105 21:48:15.399622 5000 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 21:48:15 crc kubenswrapper[5000]: E0105 21:48:15.399657 5000 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 21:48:15 crc kubenswrapper[5000]: E0105 21:48:15.399758 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs podName:fb31c907-60af-4a8c-a49f-977f28a18e20 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:23.399707474 +0000 UTC m=+858.355910003 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs") pod "openstack-operator-controller-manager-5cd5f6db77-hgptq" (UID: "fb31c907-60af-4a8c-a49f-977f28a18e20") : secret "metrics-server-cert" not found Jan 05 21:48:15 crc kubenswrapper[5000]: E0105 21:48:15.399781 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs podName:fb31c907-60af-4a8c-a49f-977f28a18e20 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:23.399773136 +0000 UTC m=+858.355975685 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs") pod "openstack-operator-controller-manager-5cd5f6db77-hgptq" (UID: "fb31c907-60af-4a8c-a49f-977f28a18e20") : secret "webhook-server-cert" not found Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.926600 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jsbjc" event={"ID":"2d94d179-bc23-416d-b4c7-6925b43d7131","Type":"ContainerStarted","Data":"230f0a97ac00dff8b063701e32f976a088b5714e8966bc09a9e28955e17c1154"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.927048 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jsbjc" Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.927535 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9smz4" event={"ID":"bd739e2a-b4fb-43cb-bbc5-50b44e18bcfd","Type":"ContainerStarted","Data":"de7b9870fcdc775b369cea02474a1f859695f6e12370283d06cf7c844a1432f9"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.928118 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9smz4" Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.928994 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-m8qfg" event={"ID":"7750c973-b8d1-47f3-90ed-1034a7e6c33c","Type":"ContainerStarted","Data":"d59cfb4b5c9e56644a52282edc59feb6a74be66bb048c789a1651770730be472"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.929124 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-m8qfg" Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.929928 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-zg96g" event={"ID":"450de243-6d71-4f61-836a-47028669d2b7","Type":"ContainerStarted","Data":"9a9f91b9da4004780b52dbc388639193971fd81c65f81b260109a853a43058fe"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.930014 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-zg96g" Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.931028 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fv4wf" event={"ID":"6e1e7b73-65c0-40db-964f-93e2d81d1004","Type":"ContainerStarted","Data":"c40a7a97296c3d7ae07dde3686a4bcd60c0be383b5088bcf72a17346ce897569"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.932313 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-2q8d7" event={"ID":"a5f4bfce-86d7-4e99-984f-2a834fda3018","Type":"ContainerStarted","Data":"f36eaf62258c9f25f41111e21724fdcfb9971f5dc8f6b8534c21bed6a0bd273e"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.932829 5000 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-2q8d7" Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.933933 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2rhpx" event={"ID":"a457b96c-32bc-4fbc-80e2-3567e1fdead4","Type":"ContainerStarted","Data":"ad1b8106e1f70672d022208fd62c9a6f994f27b59c98749127226338d6cdebc3"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.934583 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2rhpx" Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.935634 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-9cd8n" event={"ID":"1236464f-4580-4f31-ab8b-a22d559aa8c3","Type":"ContainerStarted","Data":"17e18b53b0f7467f7b39d321c2ad9d62232e2e4ef30b085aee4ded2c5dbd6ff2"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.936145 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-9cd8n" Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.937169 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-rcwpw" event={"ID":"c246b6eb-3f29-404c-8b9c-f96bfc9ac87d","Type":"ContainerStarted","Data":"424b73069972f4810d817d55340f91785f101eb4f4c980b7ef066f425f33d280"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.937848 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-rcwpw" Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.938953 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mcqdp" 
event={"ID":"3b7bc759-79ec-4375-848d-a4900428e360","Type":"ContainerStarted","Data":"420f5621221bd37e2a967b0835093a29e07aac136f4eb85f76061c99122526e0"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.939418 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mcqdp" Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.944296 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-ghw2z" event={"ID":"d60727e4-58b9-43ed-ae99-0c44cab79dc9","Type":"ContainerStarted","Data":"55a711c45c492d8021c46dab96c2e8b5856f00f5041d59405fbc5f3ace55ce79"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.945302 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-ghw2z" Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.947109 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-h7j5w" event={"ID":"fe4fd66d-9294-437e-b21e-c66cf323999e","Type":"ContainerStarted","Data":"23d4daf12c38d2eb95d66d5614fa5c798e470b318442207d2865e21f90a91bf2"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.947708 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-568985c78-h7j5w" Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.949292 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-h5tz2" event={"ID":"bb2dd57d-6d64-4048-b69b-749250d948b9","Type":"ContainerStarted","Data":"283427e5fe08ef0b62a5b0704c80d0b1afa4e5f9908585f87b30b4ad4c08f506"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.950006 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-h5tz2" Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.951329 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-p6wws" event={"ID":"97262ac6-99c3-47d4-a2a4-401e945a53c7","Type":"ContainerStarted","Data":"6bf16e3518b9f6cef6ef3d16e71bb7868cd238449847ed8dba9a12f98d55db6c"} Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.951516 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-p6wws" Jan 05 21:48:19 crc kubenswrapper[5000]: I0105 21:48:19.984480 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jsbjc" podStartSLOduration=3.165889091 podStartE2EDuration="13.984463158s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.049418867 +0000 UTC m=+843.005621336" lastFinishedPulling="2026-01-05 21:48:18.867992934 +0000 UTC m=+853.824195403" observedRunningTime="2026-01-05 21:48:19.975619746 +0000 UTC m=+854.931822215" watchObservedRunningTime="2026-01-05 21:48:19.984463158 +0000 UTC m=+854.940665627" Jan 05 21:48:20 crc kubenswrapper[5000]: I0105 21:48:20.081906 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-p6wws" podStartSLOduration=3.218491801 podStartE2EDuration="14.081873316s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.029705585 +0000 UTC m=+842.985908054" lastFinishedPulling="2026-01-05 21:48:18.8930871 +0000 UTC m=+853.849289569" observedRunningTime="2026-01-05 21:48:20.081174876 +0000 UTC m=+855.037377345" watchObservedRunningTime="2026-01-05 21:48:20.081873316 +0000 UTC m=+855.038075785" Jan 05 21:48:20 crc kubenswrapper[5000]: I0105 
21:48:20.086825 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-zg96g" podStartSLOduration=3.395271691 podStartE2EDuration="14.086810756s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.180881436 +0000 UTC m=+843.137083905" lastFinishedPulling="2026-01-05 21:48:18.872420491 +0000 UTC m=+853.828622970" observedRunningTime="2026-01-05 21:48:20.038411666 +0000 UTC m=+854.994614135" watchObservedRunningTime="2026-01-05 21:48:20.086810756 +0000 UTC m=+855.043013225" Jan 05 21:48:20 crc kubenswrapper[5000]: I0105 21:48:20.114290 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-2q8d7" podStartSLOduration=3.292343788 podStartE2EDuration="14.11427461s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.049778948 +0000 UTC m=+843.005981417" lastFinishedPulling="2026-01-05 21:48:18.87170977 +0000 UTC m=+853.827912239" observedRunningTime="2026-01-05 21:48:20.11360827 +0000 UTC m=+855.069810740" watchObservedRunningTime="2026-01-05 21:48:20.11427461 +0000 UTC m=+855.070477069" Jan 05 21:48:20 crc kubenswrapper[5000]: I0105 21:48:20.173131 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-568985c78-h7j5w" podStartSLOduration=3.388180789 podStartE2EDuration="14.173114577s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.181615307 +0000 UTC m=+843.137817766" lastFinishedPulling="2026-01-05 21:48:18.966549085 +0000 UTC m=+853.922751554" observedRunningTime="2026-01-05 21:48:20.142048612 +0000 UTC m=+855.098251081" watchObservedRunningTime="2026-01-05 21:48:20.173114577 +0000 UTC m=+855.129317046" Jan 05 21:48:20 crc kubenswrapper[5000]: I0105 21:48:20.193646 5000 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-h5tz2" podStartSLOduration=3.6948392930000002 podStartE2EDuration="14.193631432s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.372947982 +0000 UTC m=+843.329150451" lastFinishedPulling="2026-01-05 21:48:18.871740121 +0000 UTC m=+853.827942590" observedRunningTime="2026-01-05 21:48:20.189989688 +0000 UTC m=+855.146192157" watchObservedRunningTime="2026-01-05 21:48:20.193631432 +0000 UTC m=+855.149833901" Jan 05 21:48:20 crc kubenswrapper[5000]: I0105 21:48:20.195135 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-9cd8n" podStartSLOduration=2.842280897 podStartE2EDuration="13.195127645s" podCreationTimestamp="2026-01-05 21:48:07 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.515080535 +0000 UTC m=+843.471283004" lastFinishedPulling="2026-01-05 21:48:18.867927283 +0000 UTC m=+853.824129752" observedRunningTime="2026-01-05 21:48:20.175079653 +0000 UTC m=+855.131282112" watchObservedRunningTime="2026-01-05 21:48:20.195127645 +0000 UTC m=+855.151330114" Jan 05 21:48:20 crc kubenswrapper[5000]: I0105 21:48:20.210923 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mcqdp" podStartSLOduration=3.188014392 podStartE2EDuration="14.210873694s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:07.849559009 +0000 UTC m=+842.805761478" lastFinishedPulling="2026-01-05 21:48:18.872418311 +0000 UTC m=+853.828620780" observedRunningTime="2026-01-05 21:48:20.21003713 +0000 UTC m=+855.166239599" watchObservedRunningTime="2026-01-05 21:48:20.210873694 +0000 UTC m=+855.167076163" Jan 05 21:48:20 crc kubenswrapper[5000]: I0105 21:48:20.237063 5000 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-m8qfg" podStartSLOduration=3.508221952 podStartE2EDuration="14.23704648s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.18139054 +0000 UTC m=+843.137593009" lastFinishedPulling="2026-01-05 21:48:18.910215058 +0000 UTC m=+853.866417537" observedRunningTime="2026-01-05 21:48:20.23106729 +0000 UTC m=+855.187269769" watchObservedRunningTime="2026-01-05 21:48:20.23704648 +0000 UTC m=+855.193248949" Jan 05 21:48:20 crc kubenswrapper[5000]: I0105 21:48:20.279852 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2rhpx" podStartSLOduration=3.605253438 podStartE2EDuration="14.279839259s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.183939133 +0000 UTC m=+843.140141602" lastFinishedPulling="2026-01-05 21:48:18.858524954 +0000 UTC m=+853.814727423" observedRunningTime="2026-01-05 21:48:20.257129093 +0000 UTC m=+855.213331562" watchObservedRunningTime="2026-01-05 21:48:20.279839259 +0000 UTC m=+855.236041728" Jan 05 21:48:20 crc kubenswrapper[5000]: I0105 21:48:20.284196 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-rcwpw" podStartSLOduration=3.434284833 podStartE2EDuration="14.284185893s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.024425535 +0000 UTC m=+842.980628004" lastFinishedPulling="2026-01-05 21:48:18.874326585 +0000 UTC m=+853.830529064" observedRunningTime="2026-01-05 21:48:20.277225505 +0000 UTC m=+855.233427974" watchObservedRunningTime="2026-01-05 21:48:20.284185893 +0000 UTC m=+855.240388362" Jan 05 21:48:20 crc kubenswrapper[5000]: I0105 21:48:20.299154 5000 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9smz4" podStartSLOduration=3.80277381 podStartE2EDuration="14.29914052s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.375715351 +0000 UTC m=+843.331917820" lastFinishedPulling="2026-01-05 21:48:18.872082061 +0000 UTC m=+853.828284530" observedRunningTime="2026-01-05 21:48:20.293388146 +0000 UTC m=+855.249590615" watchObservedRunningTime="2026-01-05 21:48:20.29914052 +0000 UTC m=+855.255342989" Jan 05 21:48:20 crc kubenswrapper[5000]: I0105 21:48:20.311871 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-ghw2z" podStartSLOduration=3.7760427869999997 podStartE2EDuration="14.311856132s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.361930728 +0000 UTC m=+843.318133197" lastFinishedPulling="2026-01-05 21:48:18.897744073 +0000 UTC m=+853.853946542" observedRunningTime="2026-01-05 21:48:20.309159045 +0000 UTC m=+855.265361514" watchObservedRunningTime="2026-01-05 21:48:20.311856132 +0000 UTC m=+855.268058601" Jan 05 21:48:20 crc kubenswrapper[5000]: I0105 21:48:20.327105 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fv4wf" podStartSLOduration=3.115879927 podStartE2EDuration="13.327091587s" podCreationTimestamp="2026-01-05 21:48:07 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.66047092 +0000 UTC m=+843.616673389" lastFinishedPulling="2026-01-05 21:48:18.87168258 +0000 UTC m=+853.827885049" observedRunningTime="2026-01-05 21:48:20.325775659 +0000 UTC m=+855.281978138" watchObservedRunningTime="2026-01-05 21:48:20.327091587 +0000 UTC m=+855.283294056" Jan 05 21:48:22 crc kubenswrapper[5000]: I0105 21:48:22.714444 5000 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert\") pod \"infra-operator-controller-manager-6d99759cf-n9mxh\" (UID: \"87ca26ac-b882-4e9a-8f90-27461a61453e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:22 crc kubenswrapper[5000]: I0105 21:48:22.719581 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87ca26ac-b882-4e9a-8f90-27461a61453e-cert\") pod \"infra-operator-controller-manager-6d99759cf-n9mxh\" (UID: \"87ca26ac-b882-4e9a-8f90-27461a61453e\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:22 crc kubenswrapper[5000]: I0105 21:48:22.971356 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7" event={"ID":"5830ae86-6c11-4567-8f4a-28d4e3251c07","Type":"ContainerStarted","Data":"95665fa6b100c23e4dcf78f7d3e78ff4710ed9b5f2c0c4a9eec8a0259e684060"} Jan 05 21:48:22 crc kubenswrapper[5000]: I0105 21:48:22.972189 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7" Jan 05 21:48:23 crc kubenswrapper[5000]: I0105 21:48:23.011439 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:23 crc kubenswrapper[5000]: I0105 21:48:23.324688 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h\" (UID: \"f4d8f065-ce54-4bc9-9caf-e6a131e73a35\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:23 crc kubenswrapper[5000]: E0105 21:48:23.324922 5000 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 21:48:23 crc kubenswrapper[5000]: E0105 21:48:23.324971 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert podName:f4d8f065-ce54-4bc9-9caf-e6a131e73a35 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:39.324954085 +0000 UTC m=+874.281156564 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" (UID: "f4d8f065-ce54-4bc9-9caf-e6a131e73a35") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 05 21:48:23 crc kubenswrapper[5000]: E0105 21:48:23.426982 5000 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 05 21:48:23 crc kubenswrapper[5000]: E0105 21:48:23.427080 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs podName:fb31c907-60af-4a8c-a49f-977f28a18e20 nodeName:}" failed. 
No retries permitted until 2026-01-05 21:48:39.427058156 +0000 UTC m=+874.383260625 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs") pod "openstack-operator-controller-manager-5cd5f6db77-hgptq" (UID: "fb31c907-60af-4a8c-a49f-977f28a18e20") : secret "metrics-server-cert" not found Jan 05 21:48:23 crc kubenswrapper[5000]: I0105 21:48:23.426715 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:23 crc kubenswrapper[5000]: I0105 21:48:23.427646 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:23 crc kubenswrapper[5000]: E0105 21:48:23.428159 5000 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 05 21:48:23 crc kubenswrapper[5000]: E0105 21:48:23.428204 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs podName:fb31c907-60af-4a8c-a49f-977f28a18e20 nodeName:}" failed. No retries permitted until 2026-01-05 21:48:39.428191198 +0000 UTC m=+874.384393657 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs") pod "openstack-operator-controller-manager-5cd5f6db77-hgptq" (UID: "fb31c907-60af-4a8c-a49f-977f28a18e20") : secret "webhook-server-cert" not found Jan 05 21:48:26 crc kubenswrapper[5000]: I0105 21:48:26.040609 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7" podStartSLOduration=5.45384449 podStartE2EDuration="19.040594924s" podCreationTimestamp="2026-01-05 21:48:07 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.628671144 +0000 UTC m=+843.584873613" lastFinishedPulling="2026-01-05 21:48:22.215421578 +0000 UTC m=+857.171624047" observedRunningTime="2026-01-05 21:48:22.991558718 +0000 UTC m=+857.947761197" watchObservedRunningTime="2026-01-05 21:48:26.040594924 +0000 UTC m=+860.996797393" Jan 05 21:48:26 crc kubenswrapper[5000]: I0105 21:48:26.044133 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh"] Jan 05 21:48:26 crc kubenswrapper[5000]: I0105 21:48:26.957300 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-mcqdp" Jan 05 21:48:26 crc kubenswrapper[5000]: I0105 21:48:26.965676 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-p6wws" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:26.999294 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-jsbjc" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.024604 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd" 
event={"ID":"95d67b6f-d50a-49c6-b866-9926f4b9e495","Type":"ContainerStarted","Data":"5361137c9756134c1410b4a42def722de41ddb3fc53612103eb64c38974232bc"} Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.024950 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.025798 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-2rhpx" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.029016 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" event={"ID":"87ca26ac-b882-4e9a-8f90-27461a61453e","Type":"ContainerStarted","Data":"493715a6989f09b46edd52d16dae2e27418499d26da6c4c3ea92ec9fd764458b"} Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.052931 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd" podStartSLOduration=2.304910256 podStartE2EDuration="20.052912119s" podCreationTimestamp="2026-01-05 21:48:07 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.531289227 +0000 UTC m=+843.487491696" lastFinishedPulling="2026-01-05 21:48:26.27929109 +0000 UTC m=+861.235493559" observedRunningTime="2026-01-05 21:48:27.04840289 +0000 UTC m=+862.004605359" watchObservedRunningTime="2026-01-05 21:48:27.052912119 +0000 UTC m=+862.009114588" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.062680 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-2q8d7" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.078249 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-rcwpw" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.269794 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-m8qfg" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.273482 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-568985c78-h7j5w" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.363165 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-zg96g" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.457277 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-9smz4" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.568810 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-ghw2z" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.598169 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-h5tz2" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.824629 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-9cd8n" Jan 05 21:48:27 crc kubenswrapper[5000]: I0105 21:48:27.888525 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-whzx7" Jan 05 21:48:28 crc kubenswrapper[5000]: I0105 21:48:28.033523 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh" 
event={"ID":"2a8023f1-b9cf-4fa2-b421-b053941d4c42","Type":"ContainerStarted","Data":"8ef05909bc59fd792233882ff85ec0a36569e87827b5fae246cb20a0c71654ba"} Jan 05 21:48:28 crc kubenswrapper[5000]: I0105 21:48:28.034490 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh" Jan 05 21:48:28 crc kubenswrapper[5000]: I0105 21:48:28.037751 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8" event={"ID":"42922f7b-4e7e-4ef1-b465-936097b98929","Type":"ContainerStarted","Data":"8f8421c1dd2b6a147de1fcd6f898e492615fb12e1271c07b10c7fd576e3661c8"} Jan 05 21:48:28 crc kubenswrapper[5000]: I0105 21:48:28.038073 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8" Jan 05 21:48:28 crc kubenswrapper[5000]: I0105 21:48:28.039491 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g" event={"ID":"e376cad9-0c9e-423a-a1fb-b33246417cbb","Type":"ContainerStarted","Data":"3918a680144af32aa31b2506aef153e1c89d739a4fdff9e9c8cc56edf0225a13"} Jan 05 21:48:28 crc kubenswrapper[5000]: I0105 21:48:28.040050 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g" Jan 05 21:48:28 crc kubenswrapper[5000]: I0105 21:48:28.041510 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh" event={"ID":"7dab6b1b-c641-4e22-a689-a1dc62da7733","Type":"ContainerStarted","Data":"66876a39e61454c62f4ae6bc21fada8a803957d2a29749e86f2e52db9ed209e2"} Jan 05 21:48:28 crc kubenswrapper[5000]: I0105 21:48:28.041985 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh" Jan 05 21:48:28 crc kubenswrapper[5000]: I0105 21:48:28.054174 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh" podStartSLOduration=2.644072294 podStartE2EDuration="21.054159746s" podCreationTimestamp="2026-01-05 21:48:07 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.519267734 +0000 UTC m=+843.475470203" lastFinishedPulling="2026-01-05 21:48:26.929355186 +0000 UTC m=+861.885557655" observedRunningTime="2026-01-05 21:48:28.046107477 +0000 UTC m=+863.002309946" watchObservedRunningTime="2026-01-05 21:48:28.054159746 +0000 UTC m=+863.010362215" Jan 05 21:48:28 crc kubenswrapper[5000]: I0105 21:48:28.077189 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8" podStartSLOduration=4.333628396 podStartE2EDuration="22.077172402s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.536147996 +0000 UTC m=+843.492350465" lastFinishedPulling="2026-01-05 21:48:26.279692002 +0000 UTC m=+861.235894471" observedRunningTime="2026-01-05 21:48:28.073721404 +0000 UTC m=+863.029923873" watchObservedRunningTime="2026-01-05 21:48:28.077172402 +0000 UTC m=+863.033374871" Jan 05 21:48:28 crc kubenswrapper[5000]: I0105 21:48:28.100169 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh" podStartSLOduration=4.356394605 podStartE2EDuration="22.100153448s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.536876096 +0000 UTC m=+843.493078565" lastFinishedPulling="2026-01-05 21:48:26.280634929 +0000 UTC m=+861.236837408" observedRunningTime="2026-01-05 21:48:28.094713902 +0000 UTC m=+863.050916371" watchObservedRunningTime="2026-01-05 
21:48:28.100153448 +0000 UTC m=+863.056355917" Jan 05 21:48:28 crc kubenswrapper[5000]: I0105 21:48:28.127285 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g" podStartSLOduration=3.630494427 podStartE2EDuration="22.12726136s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:08.381605469 +0000 UTC m=+843.337807938" lastFinishedPulling="2026-01-05 21:48:26.878372402 +0000 UTC m=+861.834574871" observedRunningTime="2026-01-05 21:48:28.108738212 +0000 UTC m=+863.064940681" watchObservedRunningTime="2026-01-05 21:48:28.12726136 +0000 UTC m=+863.083463829" Jan 05 21:48:30 crc kubenswrapper[5000]: I0105 21:48:30.056201 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" event={"ID":"87ca26ac-b882-4e9a-8f90-27461a61453e","Type":"ContainerStarted","Data":"d58683cee87ce3ac8939ba436b0d3af41a020afeb1fe6a716c20292a658520c4"} Jan 05 21:48:30 crc kubenswrapper[5000]: I0105 21:48:30.057078 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:30 crc kubenswrapper[5000]: I0105 21:48:30.071349 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" podStartSLOduration=20.490250075 podStartE2EDuration="24.071332482s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:26.285777055 +0000 UTC m=+861.241979524" lastFinishedPulling="2026-01-05 21:48:29.866859462 +0000 UTC m=+864.823061931" observedRunningTime="2026-01-05 21:48:30.069289453 +0000 UTC m=+865.025491942" watchObservedRunningTime="2026-01-05 21:48:30.071332482 +0000 UTC m=+865.027534951" Jan 05 21:48:37 crc kubenswrapper[5000]: I0105 21:48:37.634086 5000 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-xrl9g" Jan 05 21:48:37 crc kubenswrapper[5000]: I0105 21:48:37.728513 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-lh5t8" Jan 05 21:48:37 crc kubenswrapper[5000]: I0105 21:48:37.782982 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-v6nfh" Jan 05 21:48:37 crc kubenswrapper[5000]: I0105 21:48:37.813278 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-pk7nh" Jan 05 21:48:37 crc kubenswrapper[5000]: I0105 21:48:37.848842 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dzjnd" Jan 05 21:48:39 crc kubenswrapper[5000]: I0105 21:48:39.390107 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h\" (UID: \"f4d8f065-ce54-4bc9-9caf-e6a131e73a35\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:39 crc kubenswrapper[5000]: I0105 21:48:39.398515 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f4d8f065-ce54-4bc9-9caf-e6a131e73a35-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h\" (UID: \"f4d8f065-ce54-4bc9-9caf-e6a131e73a35\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:39 crc kubenswrapper[5000]: I0105 21:48:39.492162 5000 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:39 crc kubenswrapper[5000]: I0105 21:48:39.492221 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:39 crc kubenswrapper[5000]: I0105 21:48:39.497190 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-metrics-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:39 crc kubenswrapper[5000]: I0105 21:48:39.498208 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fb31c907-60af-4a8c-a49f-977f28a18e20-webhook-certs\") pod \"openstack-operator-controller-manager-5cd5f6db77-hgptq\" (UID: \"fb31c907-60af-4a8c-a49f-977f28a18e20\") " pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:39 crc kubenswrapper[5000]: I0105 21:48:39.555152 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:39 crc kubenswrapper[5000]: I0105 21:48:39.761163 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:39 crc kubenswrapper[5000]: I0105 21:48:39.982636 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h"] Jan 05 21:48:39 crc kubenswrapper[5000]: W0105 21:48:39.990474 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4d8f065_ce54_4bc9_9caf_e6a131e73a35.slice/crio-090ce305a15eeb434881203e0aed8695c2e3d40587ad149e9e2236eaffebcb42 WatchSource:0}: Error finding container 090ce305a15eeb434881203e0aed8695c2e3d40587ad149e9e2236eaffebcb42: Status 404 returned error can't find the container with id 090ce305a15eeb434881203e0aed8695c2e3d40587ad149e9e2236eaffebcb42 Jan 05 21:48:40 crc kubenswrapper[5000]: I0105 21:48:40.120817 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" event={"ID":"f4d8f065-ce54-4bc9-9caf-e6a131e73a35","Type":"ContainerStarted","Data":"090ce305a15eeb434881203e0aed8695c2e3d40587ad149e9e2236eaffebcb42"} Jan 05 21:48:40 crc kubenswrapper[5000]: W0105 21:48:40.269933 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb31c907_60af_4a8c_a49f_977f28a18e20.slice/crio-02500708c222c56da0d5de520de532ec7d875a67bf49304c9937ff80df66f9bc WatchSource:0}: Error finding container 02500708c222c56da0d5de520de532ec7d875a67bf49304c9937ff80df66f9bc: Status 404 returned error can't find the container with id 02500708c222c56da0d5de520de532ec7d875a67bf49304c9937ff80df66f9bc Jan 05 21:48:40 crc kubenswrapper[5000]: I0105 21:48:40.273027 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq"] Jan 05 21:48:41 crc kubenswrapper[5000]: I0105 
21:48:41.149776 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" event={"ID":"fb31c907-60af-4a8c-a49f-977f28a18e20","Type":"ContainerStarted","Data":"02500708c222c56da0d5de520de532ec7d875a67bf49304c9937ff80df66f9bc"} Jan 05 21:48:43 crc kubenswrapper[5000]: I0105 21:48:43.017881 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-n9mxh" Jan 05 21:48:44 crc kubenswrapper[5000]: I0105 21:48:44.168271 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" event={"ID":"fb31c907-60af-4a8c-a49f-977f28a18e20","Type":"ContainerStarted","Data":"70e92e68daf4b572383b49444239516a0cfd9e7155e6b373fb4590beb70fa14b"} Jan 05 21:48:44 crc kubenswrapper[5000]: I0105 21:48:44.168419 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:44 crc kubenswrapper[5000]: I0105 21:48:44.194220 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" podStartSLOduration=37.194206903 podStartE2EDuration="37.194206903s" podCreationTimestamp="2026-01-05 21:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:48:44.18813287 +0000 UTC m=+879.144335339" watchObservedRunningTime="2026-01-05 21:48:44.194206903 +0000 UTC m=+879.150409372" Jan 05 21:48:47 crc kubenswrapper[5000]: I0105 21:48:47.191745 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" 
event={"ID":"f4d8f065-ce54-4bc9-9caf-e6a131e73a35","Type":"ContainerStarted","Data":"efeee87eb7caae848dcd2f334c6ceff6dc4107cc7c8fc3f3693a31e9b679723b"} Jan 05 21:48:47 crc kubenswrapper[5000]: I0105 21:48:47.192264 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:48:47 crc kubenswrapper[5000]: I0105 21:48:47.221948 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" podStartSLOduration=34.811149314 podStartE2EDuration="41.221930212s" podCreationTimestamp="2026-01-05 21:48:06 +0000 UTC" firstStartedPulling="2026-01-05 21:48:39.994688323 +0000 UTC m=+874.950890802" lastFinishedPulling="2026-01-05 21:48:46.405469221 +0000 UTC m=+881.361671700" observedRunningTime="2026-01-05 21:48:47.21521425 +0000 UTC m=+882.171416719" watchObservedRunningTime="2026-01-05 21:48:47.221930212 +0000 UTC m=+882.178132681" Jan 05 21:48:49 crc kubenswrapper[5000]: I0105 21:48:49.769735 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5cd5f6db77-hgptq" Jan 05 21:48:59 crc kubenswrapper[5000]: I0105 21:48:59.562504 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.455876 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pktpv"] Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.457944 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pktpv" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.461820 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.462405 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.462773 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-krbhh" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.464346 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.479217 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pktpv"] Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.490422 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cfeafb-f53b-45fa-bbf7-ab056686a65d-config\") pod \"dnsmasq-dns-675f4bcbfc-pktpv\" (UID: \"30cfeafb-f53b-45fa-bbf7-ab056686a65d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pktpv" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.490489 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qlsv\" (UniqueName: \"kubernetes.io/projected/30cfeafb-f53b-45fa-bbf7-ab056686a65d-kube-api-access-4qlsv\") pod \"dnsmasq-dns-675f4bcbfc-pktpv\" (UID: \"30cfeafb-f53b-45fa-bbf7-ab056686a65d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pktpv" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.550251 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rjw7g"] Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.551716 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.555204 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.592004 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmf8\" (UniqueName: \"kubernetes.io/projected/767cebf8-00c4-4519-b794-816cf5b6fc69-kube-api-access-nvmf8\") pod \"dnsmasq-dns-78dd6ddcc-rjw7g\" (UID: \"767cebf8-00c4-4519-b794-816cf5b6fc69\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.592100 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cfeafb-f53b-45fa-bbf7-ab056686a65d-config\") pod \"dnsmasq-dns-675f4bcbfc-pktpv\" (UID: \"30cfeafb-f53b-45fa-bbf7-ab056686a65d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pktpv" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.592140 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qlsv\" (UniqueName: \"kubernetes.io/projected/30cfeafb-f53b-45fa-bbf7-ab056686a65d-kube-api-access-4qlsv\") pod \"dnsmasq-dns-675f4bcbfc-pktpv\" (UID: \"30cfeafb-f53b-45fa-bbf7-ab056686a65d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pktpv" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.592189 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/767cebf8-00c4-4519-b794-816cf5b6fc69-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-rjw7g\" (UID: \"767cebf8-00c4-4519-b794-816cf5b6fc69\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.592223 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/767cebf8-00c4-4519-b794-816cf5b6fc69-config\") pod \"dnsmasq-dns-78dd6ddcc-rjw7g\" (UID: \"767cebf8-00c4-4519-b794-816cf5b6fc69\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.593184 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cfeafb-f53b-45fa-bbf7-ab056686a65d-config\") pod \"dnsmasq-dns-675f4bcbfc-pktpv\" (UID: \"30cfeafb-f53b-45fa-bbf7-ab056686a65d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pktpv" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.616699 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qlsv\" (UniqueName: \"kubernetes.io/projected/30cfeafb-f53b-45fa-bbf7-ab056686a65d-kube-api-access-4qlsv\") pod \"dnsmasq-dns-675f4bcbfc-pktpv\" (UID: \"30cfeafb-f53b-45fa-bbf7-ab056686a65d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-pktpv" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.640784 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rjw7g"] Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.693065 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/767cebf8-00c4-4519-b794-816cf5b6fc69-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-rjw7g\" (UID: \"767cebf8-00c4-4519-b794-816cf5b6fc69\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.693114 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/767cebf8-00c4-4519-b794-816cf5b6fc69-config\") pod \"dnsmasq-dns-78dd6ddcc-rjw7g\" (UID: \"767cebf8-00c4-4519-b794-816cf5b6fc69\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.693141 5000 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-nvmf8\" (UniqueName: \"kubernetes.io/projected/767cebf8-00c4-4519-b794-816cf5b6fc69-kube-api-access-nvmf8\") pod \"dnsmasq-dns-78dd6ddcc-rjw7g\" (UID: \"767cebf8-00c4-4519-b794-816cf5b6fc69\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.694122 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/767cebf8-00c4-4519-b794-816cf5b6fc69-config\") pod \"dnsmasq-dns-78dd6ddcc-rjw7g\" (UID: \"767cebf8-00c4-4519-b794-816cf5b6fc69\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.694187 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/767cebf8-00c4-4519-b794-816cf5b6fc69-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-rjw7g\" (UID: \"767cebf8-00c4-4519-b794-816cf5b6fc69\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.711250 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmf8\" (UniqueName: \"kubernetes.io/projected/767cebf8-00c4-4519-b794-816cf5b6fc69-kube-api-access-nvmf8\") pod \"dnsmasq-dns-78dd6ddcc-rjw7g\" (UID: \"767cebf8-00c4-4519-b794-816cf5b6fc69\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.775290 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pktpv" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.869410 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" Jan 05 21:49:14 crc kubenswrapper[5000]: I0105 21:49:14.998090 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pktpv"] Jan 05 21:49:15 crc kubenswrapper[5000]: I0105 21:49:15.013200 5000 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:49:15 crc kubenswrapper[5000]: I0105 21:49:15.109847 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rjw7g"] Jan 05 21:49:15 crc kubenswrapper[5000]: W0105 21:49:15.112805 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod767cebf8_00c4_4519_b794_816cf5b6fc69.slice/crio-b340262d6cf7afed3b3efff3568bc0f895a6cc046dd257c7ac9b804a3129c30d WatchSource:0}: Error finding container b340262d6cf7afed3b3efff3568bc0f895a6cc046dd257c7ac9b804a3129c30d: Status 404 returned error can't find the container with id b340262d6cf7afed3b3efff3568bc0f895a6cc046dd257c7ac9b804a3129c30d Jan 05 21:49:15 crc kubenswrapper[5000]: I0105 21:49:15.386145 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pktpv" event={"ID":"30cfeafb-f53b-45fa-bbf7-ab056686a65d","Type":"ContainerStarted","Data":"2f79b881fe5b7f5e926cff131b03c3e49bcb9dec819ce23a233336c86692499e"} Jan 05 21:49:15 crc kubenswrapper[5000]: I0105 21:49:15.386863 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" event={"ID":"767cebf8-00c4-4519-b794-816cf5b6fc69","Type":"ContainerStarted","Data":"b340262d6cf7afed3b3efff3568bc0f895a6cc046dd257c7ac9b804a3129c30d"} Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.366014 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pktpv"] Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.386027 5000 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zn9f4"] Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.389525 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.392053 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zn9f4"] Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.534458 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kntzs\" (UniqueName: \"kubernetes.io/projected/f818c898-5db5-41e5-9614-8f58fcaca803-kube-api-access-kntzs\") pod \"dnsmasq-dns-666b6646f7-zn9f4\" (UID: \"f818c898-5db5-41e5-9614-8f58fcaca803\") " pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.534517 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f818c898-5db5-41e5-9614-8f58fcaca803-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zn9f4\" (UID: \"f818c898-5db5-41e5-9614-8f58fcaca803\") " pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.534548 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f818c898-5db5-41e5-9614-8f58fcaca803-config\") pod \"dnsmasq-dns-666b6646f7-zn9f4\" (UID: \"f818c898-5db5-41e5-9614-8f58fcaca803\") " pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.635340 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f818c898-5db5-41e5-9614-8f58fcaca803-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zn9f4\" (UID: \"f818c898-5db5-41e5-9614-8f58fcaca803\") " pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" Jan 05 
21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.635382 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f818c898-5db5-41e5-9614-8f58fcaca803-config\") pod \"dnsmasq-dns-666b6646f7-zn9f4\" (UID: \"f818c898-5db5-41e5-9614-8f58fcaca803\") " pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.635480 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kntzs\" (UniqueName: \"kubernetes.io/projected/f818c898-5db5-41e5-9614-8f58fcaca803-kube-api-access-kntzs\") pod \"dnsmasq-dns-666b6646f7-zn9f4\" (UID: \"f818c898-5db5-41e5-9614-8f58fcaca803\") " pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.636546 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f818c898-5db5-41e5-9614-8f58fcaca803-config\") pod \"dnsmasq-dns-666b6646f7-zn9f4\" (UID: \"f818c898-5db5-41e5-9614-8f58fcaca803\") " pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.636646 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f818c898-5db5-41e5-9614-8f58fcaca803-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zn9f4\" (UID: \"f818c898-5db5-41e5-9614-8f58fcaca803\") " pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.684065 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kntzs\" (UniqueName: \"kubernetes.io/projected/f818c898-5db5-41e5-9614-8f58fcaca803-kube-api-access-kntzs\") pod \"dnsmasq-dns-666b6646f7-zn9f4\" (UID: \"f818c898-5db5-41e5-9614-8f58fcaca803\") " pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.704501 5000 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rjw7g"] Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.713130 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.744969 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2fxmv"] Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.760776 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2fxmv"] Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.760914 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.945275 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-config\") pod \"dnsmasq-dns-57d769cc4f-2fxmv\" (UID: \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.945387 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvv6x\" (UniqueName: \"kubernetes.io/projected/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-kube-api-access-pvv6x\") pod \"dnsmasq-dns-57d769cc4f-2fxmv\" (UID: \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:17 crc kubenswrapper[5000]: I0105 21:49:17.945427 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2fxmv\" (UID: \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:18 crc 
kubenswrapper[5000]: I0105 21:49:18.046646 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvv6x\" (UniqueName: \"kubernetes.io/projected/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-kube-api-access-pvv6x\") pod \"dnsmasq-dns-57d769cc4f-2fxmv\" (UID: \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.046720 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2fxmv\" (UID: \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.046822 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-config\") pod \"dnsmasq-dns-57d769cc4f-2fxmv\" (UID: \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.048128 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-config\") pod \"dnsmasq-dns-57d769cc4f-2fxmv\" (UID: \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.048688 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2fxmv\" (UID: \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.070563 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pvv6x\" (UniqueName: \"kubernetes.io/projected/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-kube-api-access-pvv6x\") pod \"dnsmasq-dns-57d769cc4f-2fxmv\" (UID: \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.083754 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.509593 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.512828 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.516078 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.516260 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.516395 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.516508 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.516691 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.516970 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pr5wp" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.517072 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 
21:49:18.526962 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.655182 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.655323 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.655391 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.655421 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.655450 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.655477 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.655511 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.655537 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-config-data\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.655561 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc4fl\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-kube-api-access-sc4fl\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.655595 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 
05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.655620 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.756750 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.756804 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.756846 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.756868 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.756910 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.756933 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.756961 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.756986 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-config-data\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.757000 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc4fl\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-kube-api-access-sc4fl\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.757019 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: 
\"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.757035 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.758129 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.758471 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.758539 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-config-data\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.758722 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.758914 5000 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.758997 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.761269 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.762176 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.770114 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.771318 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.774751 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc4fl\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-kube-api-access-sc4fl\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.779043 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.840462 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.852782 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.854242 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.858697 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.858777 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-md6l9" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.858870 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.858966 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.859002 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.859150 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.866185 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.870821 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.959481 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.959527 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.959545 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.959575 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.959595 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.959638 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.959978 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.960031 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.960122 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j88w7\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-kube-api-access-j88w7\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.960148 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:18 crc kubenswrapper[5000]: I0105 21:49:18.960165 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.061595 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.061667 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.061699 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.061740 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j88w7\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-kube-api-access-j88w7\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.061760 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.061777 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.061793 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.061809 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.061824 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.061848 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.061873 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.062210 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.062800 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.063759 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.063926 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.064268 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc 
kubenswrapper[5000]: I0105 21:49:19.064645 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.067743 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.068306 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.070406 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.072081 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.082961 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j88w7\" (UniqueName: 
\"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-kube-api-access-j88w7\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.094393 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.182992 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.976972 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.978498 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.985154 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-plqg9" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.985391 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.985638 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.986577 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.987508 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 05 21:49:19 crc kubenswrapper[5000]: I0105 21:49:19.990571 5000 reflector.go:368] Caches populated 
for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.075156 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-config-data-default\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.075451 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.075498 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.075532 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.075711 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " 
pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.075756 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.075863 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-kolla-config\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.075942 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dprj4\" (UniqueName: \"kubernetes.io/projected/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-kube-api-access-dprj4\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.177680 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-config-data-default\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.177750 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc 
kubenswrapper[5000]: I0105 21:49:20.177788 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.177822 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.177880 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.178013 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.178072 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-kolla-config\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.178093 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dprj4\" 
(UniqueName: \"kubernetes.io/projected/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-kube-api-access-dprj4\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.178331 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.178975 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-config-data-default\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.179430 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-kolla-config\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.179491 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.179640 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") device mount 
path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.182473 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.182510 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.197994 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dprj4\" (UniqueName: \"kubernetes.io/projected/eb55e4be-34e2-4649-aa6a-24b2019cc9cf-kube-api-access-dprj4\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.208086 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"eb55e4be-34e2-4649-aa6a-24b2019cc9cf\") " pod="openstack/openstack-galera-0" Jan 05 21:49:20 crc kubenswrapper[5000]: I0105 21:49:20.304537 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.391036 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.393005 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.395089 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.396367 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.396636 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-72qrj" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.397393 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.397824 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.496830 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e574d5-969c-40aa-abd6-69f81feef2c5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.496874 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43e574d5-969c-40aa-abd6-69f81feef2c5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.496913 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/43e574d5-969c-40aa-abd6-69f81feef2c5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.497049 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43e574d5-969c-40aa-abd6-69f81feef2c5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.497119 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnqwm\" (UniqueName: \"kubernetes.io/projected/43e574d5-969c-40aa-abd6-69f81feef2c5-kube-api-access-gnqwm\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.497318 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43e574d5-969c-40aa-abd6-69f81feef2c5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.497368 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.497530 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/43e574d5-969c-40aa-abd6-69f81feef2c5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.598507 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e574d5-969c-40aa-abd6-69f81feef2c5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.598550 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43e574d5-969c-40aa-abd6-69f81feef2c5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.598570 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43e574d5-969c-40aa-abd6-69f81feef2c5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.598606 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43e574d5-969c-40aa-abd6-69f81feef2c5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.598625 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnqwm\" (UniqueName: 
\"kubernetes.io/projected/43e574d5-969c-40aa-abd6-69f81feef2c5-kube-api-access-gnqwm\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.598653 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43e574d5-969c-40aa-abd6-69f81feef2c5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.598669 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.598695 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43e574d5-969c-40aa-abd6-69f81feef2c5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.599076 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43e574d5-969c-40aa-abd6-69f81feef2c5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.599753 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43e574d5-969c-40aa-abd6-69f81feef2c5-config-data-default\") pod 
\"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.599881 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.599989 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43e574d5-969c-40aa-abd6-69f81feef2c5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.600483 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43e574d5-969c-40aa-abd6-69f81feef2c5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.606270 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e574d5-969c-40aa-abd6-69f81feef2c5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.606312 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43e574d5-969c-40aa-abd6-69f81feef2c5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " 
pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.622194 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnqwm\" (UniqueName: \"kubernetes.io/projected/43e574d5-969c-40aa-abd6-69f81feef2c5-kube-api-access-gnqwm\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.626405 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"43e574d5-969c-40aa-abd6-69f81feef2c5\") " pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.719265 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.850384 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.851362 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.857874 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.858841 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-nrf88" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.859040 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 05 21:49:21 crc kubenswrapper[5000]: I0105 21:49:21.864679 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.004070 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.004236 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-config-data\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.004411 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-kolla-config\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.004450 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tzn9h\" (UniqueName: \"kubernetes.io/projected/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-kube-api-access-tzn9h\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.004574 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.106063 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.106116 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-config-data\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.106165 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-kolla-config\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.106182 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzn9h\" (UniqueName: \"kubernetes.io/projected/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-kube-api-access-tzn9h\") pod \"memcached-0\" (UID: 
\"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.106215 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.107208 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-config-data\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.107443 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-kolla-config\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.115583 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.115661 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.148138 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzn9h\" (UniqueName: 
\"kubernetes.io/projected/b7b36978-e904-42dc-b2e9-cfd481f5b6f0-kube-api-access-tzn9h\") pod \"memcached-0\" (UID: \"b7b36978-e904-42dc-b2e9-cfd481f5b6f0\") " pod="openstack/memcached-0" Jan 05 21:49:22 crc kubenswrapper[5000]: I0105 21:49:22.213794 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 05 21:49:23 crc kubenswrapper[5000]: I0105 21:49:23.098960 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:49:23 crc kubenswrapper[5000]: I0105 21:49:23.099227 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:49:24 crc kubenswrapper[5000]: I0105 21:49:24.176819 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 21:49:24 crc kubenswrapper[5000]: I0105 21:49:24.178489 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 21:49:24 crc kubenswrapper[5000]: I0105 21:49:24.188355 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-rk8fx" Jan 05 21:49:24 crc kubenswrapper[5000]: I0105 21:49:24.195689 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 21:49:24 crc kubenswrapper[5000]: I0105 21:49:24.251117 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzkng\" (UniqueName: \"kubernetes.io/projected/0be433e4-7178-4637-922d-9d1d455b7f76-kube-api-access-zzkng\") pod \"kube-state-metrics-0\" (UID: \"0be433e4-7178-4637-922d-9d1d455b7f76\") " pod="openstack/kube-state-metrics-0" Jan 05 21:49:24 crc kubenswrapper[5000]: I0105 21:49:24.352237 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzkng\" (UniqueName: \"kubernetes.io/projected/0be433e4-7178-4637-922d-9d1d455b7f76-kube-api-access-zzkng\") pod \"kube-state-metrics-0\" (UID: \"0be433e4-7178-4637-922d-9d1d455b7f76\") " pod="openstack/kube-state-metrics-0" Jan 05 21:49:24 crc kubenswrapper[5000]: I0105 21:49:24.378736 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzkng\" (UniqueName: \"kubernetes.io/projected/0be433e4-7178-4637-922d-9d1d455b7f76-kube-api-access-zzkng\") pod \"kube-state-metrics-0\" (UID: \"0be433e4-7178-4637-922d-9d1d455b7f76\") " pod="openstack/kube-state-metrics-0" Jan 05 21:49:24 crc kubenswrapper[5000]: I0105 21:49:24.540287 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.280434 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qtwd6"] Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.281709 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.284365 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5wdkn" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.284421 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.284365 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.295608 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qtwd6"] Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.417373 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-cgdx9"] Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.419159 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.425259 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-scripts\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.428026 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-combined-ca-bundle\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.428178 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-ovn-controller-tls-certs\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.428406 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-var-run-ovn\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.428562 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-var-log-ovn\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " 
pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.428807 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-var-run\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.428967 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmp2b\" (UniqueName: \"kubernetes.io/projected/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-kube-api-access-wmp2b\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.448284 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cgdx9"] Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.530848 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-var-run\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.531131 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmp2b\" (UniqueName: \"kubernetes.io/projected/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-kube-api-access-wmp2b\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.531208 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnt2f\" (UniqueName: 
\"kubernetes.io/projected/4e574607-e42c-4140-b43a-379ba76f4e73-kube-api-access-hnt2f\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.531282 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e574607-e42c-4140-b43a-379ba76f4e73-var-run\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.531374 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4e574607-e42c-4140-b43a-379ba76f4e73-etc-ovs\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.531460 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-scripts\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.531538 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-combined-ca-bundle\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.531480 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-var-run\") pod \"ovn-controller-qtwd6\" (UID: 
\"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.531675 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-ovn-controller-tls-certs\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.531753 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4e574607-e42c-4140-b43a-379ba76f4e73-var-lib\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.531834 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-var-run-ovn\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.531937 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-var-log-ovn\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.532021 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e574607-e42c-4140-b43a-379ba76f4e73-scripts\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc 
kubenswrapper[5000]: I0105 21:49:28.532112 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4e574607-e42c-4140-b43a-379ba76f4e73-var-log\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.532301 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-var-run-ovn\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.532562 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-var-log-ovn\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.533634 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-scripts\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.543785 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-combined-ca-bundle\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.557183 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-ovn-controller-tls-certs\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.570200 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmp2b\" (UniqueName: \"kubernetes.io/projected/30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1-kube-api-access-wmp2b\") pod \"ovn-controller-qtwd6\" (UID: \"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1\") " pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.633856 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e574607-e42c-4140-b43a-379ba76f4e73-scripts\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.633929 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4e574607-e42c-4140-b43a-379ba76f4e73-var-log\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.633970 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnt2f\" (UniqueName: \"kubernetes.io/projected/4e574607-e42c-4140-b43a-379ba76f4e73-kube-api-access-hnt2f\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.633990 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e574607-e42c-4140-b43a-379ba76f4e73-var-run\") pod \"ovn-controller-ovs-cgdx9\" (UID: 
\"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.634021 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4e574607-e42c-4140-b43a-379ba76f4e73-etc-ovs\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.634082 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4e574607-e42c-4140-b43a-379ba76f4e73-var-lib\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.636408 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e574607-e42c-4140-b43a-379ba76f4e73-scripts\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.636673 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4e574607-e42c-4140-b43a-379ba76f4e73-var-log\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.636950 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4e574607-e42c-4140-b43a-379ba76f4e73-etc-ovs\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.637019 5000 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e574607-e42c-4140-b43a-379ba76f4e73-var-run\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.637068 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4e574607-e42c-4140-b43a-379ba76f4e73-var-lib\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.644402 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.652877 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnt2f\" (UniqueName: \"kubernetes.io/projected/4e574607-e42c-4140-b43a-379ba76f4e73-kube-api-access-hnt2f\") pod \"ovn-controller-ovs-cgdx9\" (UID: \"4e574607-e42c-4140-b43a-379ba76f4e73\") " pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:28 crc kubenswrapper[5000]: I0105 21:49:28.753230 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:49:29 crc kubenswrapper[5000]: E0105 21:49:29.118476 5000 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 05 21:49:29 crc kubenswrapper[5000]: E0105 21:49:29.118656 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvmf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,Secu
rityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-rjw7g_openstack(767cebf8-00c4-4519-b794-816cf5b6fc69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 21:49:29 crc kubenswrapper[5000]: E0105 21:49:29.119882 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" podUID="767cebf8-00c4-4519-b794-816cf5b6fc69" Jan 05 21:49:29 crc kubenswrapper[5000]: E0105 21:49:29.146093 5000 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 05 21:49:29 crc kubenswrapper[5000]: E0105 21:49:29.146275 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4qlsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-pktpv_openstack(30cfeafb-f53b-45fa-bbf7-ab056686a65d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 21:49:29 crc kubenswrapper[5000]: E0105 21:49:29.147596 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-pktpv" podUID="30cfeafb-f53b-45fa-bbf7-ab056686a65d" Jan 05 21:49:29 crc kubenswrapper[5000]: I0105 21:49:29.707621 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.004658 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.006263 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.012281 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2fxmv"] Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.019691 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.024395 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qtwd6"] Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.044498 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zn9f4"] Jan 05 21:49:30 crc kubenswrapper[5000]: W0105 21:49:30.051021 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30f46892_7d0f_4bf9_92c7_2f8fbfdd4ee1.slice/crio-1d8eb798ed906a90b43c2959cd85d7c20533a1be884e4fd672fed6f6aaaed8b3 WatchSource:0}: Error finding container 1d8eb798ed906a90b43c2959cd85d7c20533a1be884e4fd672fed6f6aaaed8b3: Status 404 returned error can't find the container with id 1d8eb798ed906a90b43c2959cd85d7c20533a1be884e4fd672fed6f6aaaed8b3 Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.051241 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pktpv" Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.054708 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.055927 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvmf8\" (UniqueName: \"kubernetes.io/projected/767cebf8-00c4-4519-b794-816cf5b6fc69-kube-api-access-nvmf8\") pod \"767cebf8-00c4-4519-b794-816cf5b6fc69\" (UID: \"767cebf8-00c4-4519-b794-816cf5b6fc69\") " Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.055957 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/767cebf8-00c4-4519-b794-816cf5b6fc69-dns-svc\") pod \"767cebf8-00c4-4519-b794-816cf5b6fc69\" (UID: \"767cebf8-00c4-4519-b794-816cf5b6fc69\") " Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.056009 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/767cebf8-00c4-4519-b794-816cf5b6fc69-config\") pod \"767cebf8-00c4-4519-b794-816cf5b6fc69\" (UID: \"767cebf8-00c4-4519-b794-816cf5b6fc69\") " Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.056746 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/767cebf8-00c4-4519-b794-816cf5b6fc69-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "767cebf8-00c4-4519-b794-816cf5b6fc69" (UID: "767cebf8-00c4-4519-b794-816cf5b6fc69"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.056834 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/767cebf8-00c4-4519-b794-816cf5b6fc69-config" (OuterVolumeSpecName: "config") pod "767cebf8-00c4-4519-b794-816cf5b6fc69" (UID: "767cebf8-00c4-4519-b794-816cf5b6fc69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.057006 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/767cebf8-00c4-4519-b794-816cf5b6fc69-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.057020 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/767cebf8-00c4-4519-b794-816cf5b6fc69-config\") on node \"crc\" DevicePath \"\""
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.064738 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.072601 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/767cebf8-00c4-4519-b794-816cf5b6fc69-kube-api-access-nvmf8" (OuterVolumeSpecName: "kube-api-access-nvmf8") pod "767cebf8-00c4-4519-b794-816cf5b6fc69" (UID: "767cebf8-00c4-4519-b794-816cf5b6fc69"). InnerVolumeSpecName "kube-api-access-nvmf8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.081267 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 05 21:49:30 crc kubenswrapper[5000]: W0105 21:49:30.093005 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0be433e4_7178_4637_922d_9d1d455b7f76.slice/crio-bb97be9377b2ab4490ad0a2a25e75e4828c4ca854acc66aee4701f67ffd2af01 WatchSource:0}: Error finding container bb97be9377b2ab4490ad0a2a25e75e4828c4ca854acc66aee4701f67ffd2af01: Status 404 returned error can't find the container with id bb97be9377b2ab4490ad0a2a25e75e4828c4ca854acc66aee4701f67ffd2af01
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.158124 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cfeafb-f53b-45fa-bbf7-ab056686a65d-config\") pod \"30cfeafb-f53b-45fa-bbf7-ab056686a65d\" (UID: \"30cfeafb-f53b-45fa-bbf7-ab056686a65d\") "
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.158201 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qlsv\" (UniqueName: \"kubernetes.io/projected/30cfeafb-f53b-45fa-bbf7-ab056686a65d-kube-api-access-4qlsv\") pod \"30cfeafb-f53b-45fa-bbf7-ab056686a65d\" (UID: \"30cfeafb-f53b-45fa-bbf7-ab056686a65d\") "
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.158621 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvmf8\" (UniqueName: \"kubernetes.io/projected/767cebf8-00c4-4519-b794-816cf5b6fc69-kube-api-access-nvmf8\") on node \"crc\" DevicePath \"\""
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.158699 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30cfeafb-f53b-45fa-bbf7-ab056686a65d-config" (OuterVolumeSpecName: "config") pod "30cfeafb-f53b-45fa-bbf7-ab056686a65d" (UID: "30cfeafb-f53b-45fa-bbf7-ab056686a65d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.164012 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30cfeafb-f53b-45fa-bbf7-ab056686a65d-kube-api-access-4qlsv" (OuterVolumeSpecName: "kube-api-access-4qlsv") pod "30cfeafb-f53b-45fa-bbf7-ab056686a65d" (UID: "30cfeafb-f53b-45fa-bbf7-ab056686a65d"). InnerVolumeSpecName "kube-api-access-4qlsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.190195 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cgdx9"]
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.261796 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cfeafb-f53b-45fa-bbf7-ab056686a65d-config\") on node \"crc\" DevicePath \"\""
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.261830 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qlsv\" (UniqueName: \"kubernetes.io/projected/30cfeafb-f53b-45fa-bbf7-ab056686a65d-kube-api-access-4qlsv\") on node \"crc\" DevicePath \"\""
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.324667 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-48f9l"]
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.326297 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.328148 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.328210 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.354590 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-48f9l"]
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.464522 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f01d9e3-692b-4648-b57f-3fb13e84379a-config\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.464728 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt6wf\" (UniqueName: \"kubernetes.io/projected/2f01d9e3-692b-4648-b57f-3fb13e84379a-kube-api-access-zt6wf\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.464874 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2f01d9e3-692b-4648-b57f-3fb13e84379a-ovs-rundir\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.464947 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f01d9e3-692b-4648-b57f-3fb13e84379a-combined-ca-bundle\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.465012 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2f01d9e3-692b-4648-b57f-3fb13e84379a-ovn-rundir\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.465077 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f01d9e3-692b-4648-b57f-3fb13e84379a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.516557 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" event={"ID":"f818c898-5db5-41e5-9614-8f58fcaca803","Type":"ContainerStarted","Data":"357a7f87815c922b0e9b4ca410d4e3f5e8da941c7e4814b119b795bb6177dc81"}
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.518655 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb","Type":"ContainerStarted","Data":"d7e91a0192b03cd77880800527b1731e6385200f0fb40c1ced4c34f8f2204046"}
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.520626 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-pktpv" event={"ID":"30cfeafb-f53b-45fa-bbf7-ab056686a65d","Type":"ContainerDied","Data":"2f79b881fe5b7f5e926cff131b03c3e49bcb9dec819ce23a233336c86692499e"}
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.520718 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-pktpv"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.524161 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g" event={"ID":"767cebf8-00c4-4519-b794-816cf5b6fc69","Type":"ContainerDied","Data":"b340262d6cf7afed3b3efff3568bc0f895a6cc046dd257c7ac9b804a3129c30d"}
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.524225 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rjw7g"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.525955 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qtwd6" event={"ID":"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1","Type":"ContainerStarted","Data":"1d8eb798ed906a90b43c2959cd85d7c20533a1be884e4fd672fed6f6aaaed8b3"}
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.535350 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cgdx9" event={"ID":"4e574607-e42c-4140-b43a-379ba76f4e73","Type":"ContainerStarted","Data":"18dd0dbd8da4458da2f8067fc094a595cf14229b2774f58b4d4884fdc8940132"}
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.537581 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0be433e4-7178-4637-922d-9d1d455b7f76","Type":"ContainerStarted","Data":"bb97be9377b2ab4490ad0a2a25e75e4828c4ca854acc66aee4701f67ffd2af01"}
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.540010 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb55e4be-34e2-4649-aa6a-24b2019cc9cf","Type":"ContainerStarted","Data":"d66d40ed59149f0308f42ccaab48b2567ab46cbb3bfe9149bf5a08acc7f94dd6"}
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.541121 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e","Type":"ContainerStarted","Data":"39efdc7ffc1edd538df415d6797be3fc91211d8e1cd8202a568c9c172e45b9d9"}
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.542428 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" event={"ID":"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d","Type":"ContainerStarted","Data":"1416de5f249982a6aa08c27f9a637777fb93c7985b1d0310742d79b355216d15"}
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.543496 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"43e574d5-969c-40aa-abd6-69f81feef2c5","Type":"ContainerStarted","Data":"1e24bf9b9200ab00d9f4a5c07c8422d0b3b12e5282ef6458ba5bc72c28dd8f78"}
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.544690 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b7b36978-e904-42dc-b2e9-cfd481f5b6f0","Type":"ContainerStarted","Data":"c6bd66e4537bf0db177dd98c772d90db7b91b2f547e74f34333981700350a7de"}
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.568507 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2f01d9e3-692b-4648-b57f-3fb13e84379a-ovs-rundir\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.568694 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f01d9e3-692b-4648-b57f-3fb13e84379a-combined-ca-bundle\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.568940 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2f01d9e3-692b-4648-b57f-3fb13e84379a-ovs-rundir\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.581122 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2f01d9e3-692b-4648-b57f-3fb13e84379a-ovn-rundir\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.581315 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f01d9e3-692b-4648-b57f-3fb13e84379a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.581409 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f01d9e3-692b-4648-b57f-3fb13e84379a-config\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.581474 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt6wf\" (UniqueName: \"kubernetes.io/projected/2f01d9e3-692b-4648-b57f-3fb13e84379a-kube-api-access-zt6wf\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.583241 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f01d9e3-692b-4648-b57f-3fb13e84379a-config\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.583631 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2f01d9e3-692b-4648-b57f-3fb13e84379a-ovn-rundir\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.589133 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f01d9e3-692b-4648-b57f-3fb13e84379a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.590082 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f01d9e3-692b-4648-b57f-3fb13e84379a-combined-ca-bundle\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.610697 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt6wf\" (UniqueName: \"kubernetes.io/projected/2f01d9e3-692b-4648-b57f-3fb13e84379a-kube-api-access-zt6wf\") pod \"ovn-controller-metrics-48f9l\" (UID: \"2f01d9e3-692b-4648-b57f-3fb13e84379a\") " pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.647313 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-48f9l"
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.759231 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pktpv"]
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.772391 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-pktpv"]
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.788448 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rjw7g"]
Jan 05 21:49:30 crc kubenswrapper[5000]: I0105 21:49:30.794541 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rjw7g"]
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.013816 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.015606 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.021844 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-w55d2"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.022410 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.023030 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.023110 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.027984 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.094934 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.095018 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e42459b-9f2f-45c6-8a77-6909cc2689a2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.095105 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e42459b-9f2f-45c6-8a77-6909cc2689a2-config\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.095139 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3e42459b-9f2f-45c6-8a77-6909cc2689a2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.095196 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e42459b-9f2f-45c6-8a77-6909cc2689a2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.095229 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e42459b-9f2f-45c6-8a77-6909cc2689a2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.095256 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e42459b-9f2f-45c6-8a77-6909cc2689a2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.095369 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c854d\" (UniqueName: \"kubernetes.io/projected/3e42459b-9f2f-45c6-8a77-6909cc2689a2-kube-api-access-c854d\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.196791 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c854d\" (UniqueName: \"kubernetes.io/projected/3e42459b-9f2f-45c6-8a77-6909cc2689a2-kube-api-access-c854d\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.196852 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.196880 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e42459b-9f2f-45c6-8a77-6909cc2689a2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.196940 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e42459b-9f2f-45c6-8a77-6909cc2689a2-config\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.196964 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3e42459b-9f2f-45c6-8a77-6909cc2689a2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.196995 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e42459b-9f2f-45c6-8a77-6909cc2689a2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.197019 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e42459b-9f2f-45c6-8a77-6909cc2689a2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.197039 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e42459b-9f2f-45c6-8a77-6909cc2689a2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.198447 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.198607 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3e42459b-9f2f-45c6-8a77-6909cc2689a2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.198826 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e42459b-9f2f-45c6-8a77-6909cc2689a2-config\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.199645 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e42459b-9f2f-45c6-8a77-6909cc2689a2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.208237 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e42459b-9f2f-45c6-8a77-6909cc2689a2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.211681 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.213775 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.214397 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e42459b-9f2f-45c6-8a77-6909cc2689a2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.219612 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.219625 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.226823 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c854d\" (UniqueName: \"kubernetes.io/projected/3e42459b-9f2f-45c6-8a77-6909cc2689a2-kube-api-access-c854d\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.227574 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-nz965"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.231074 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e42459b-9f2f-45c6-8a77-6909cc2689a2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.231495 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.254987 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.266451 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3e42459b-9f2f-45c6-8a77-6909cc2689a2\") " pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.289347 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-48f9l"]
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.298829 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.298920 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3628fb9-23a7-47e6-853a-e8f31311916f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.299010 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3628fb9-23a7-47e6-853a-e8f31311916f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.299037 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3628fb9-23a7-47e6-853a-e8f31311916f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.299063 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3628fb9-23a7-47e6-853a-e8f31311916f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.299084 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3628fb9-23a7-47e6-853a-e8f31311916f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.299193 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3628fb9-23a7-47e6-853a-e8f31311916f-config\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.299227 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v98n6\" (UniqueName: \"kubernetes.io/projected/f3628fb9-23a7-47e6-853a-e8f31311916f-kube-api-access-v98n6\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.338580 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30cfeafb-f53b-45fa-bbf7-ab056686a65d" path="/var/lib/kubelet/pods/30cfeafb-f53b-45fa-bbf7-ab056686a65d/volumes"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.339554 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="767cebf8-00c4-4519-b794-816cf5b6fc69" path="/var/lib/kubelet/pods/767cebf8-00c4-4519-b794-816cf5b6fc69/volumes"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.342130 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.400587 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.400650 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3628fb9-23a7-47e6-853a-e8f31311916f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.400742 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3628fb9-23a7-47e6-853a-e8f31311916f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.400770 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3628fb9-23a7-47e6-853a-e8f31311916f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.400795 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3628fb9-23a7-47e6-853a-e8f31311916f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.400815 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3628fb9-23a7-47e6-853a-e8f31311916f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.400862 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3628fb9-23a7-47e6-853a-e8f31311916f-config\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.400869 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.400901 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v98n6\" (UniqueName: \"kubernetes.io/projected/f3628fb9-23a7-47e6-853a-e8f31311916f-kube-api-access-v98n6\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.401348 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f3628fb9-23a7-47e6-853a-e8f31311916f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.402010 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3628fb9-23a7-47e6-853a-e8f31311916f-config\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.402184 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3628fb9-23a7-47e6-853a-e8f31311916f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.404831 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3628fb9-23a7-47e6-853a-e8f31311916f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.405117 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3628fb9-23a7-47e6-853a-e8f31311916f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.405515 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3628fb9-23a7-47e6-853a-e8f31311916f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.426580 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v98n6\" (UniqueName: \"kubernetes.io/projected/f3628fb9-23a7-47e6-853a-e8f31311916f-kube-api-access-v98n6\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.435585 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f3628fb9-23a7-47e6-853a-e8f31311916f\") " pod="openstack/ovsdbserver-sb-0"
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.556706 5000 generic.go:334] "Generic (PLEG): container finished" podID="9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d" containerID="13b4a9d8a5721dd2f56c8c39df0f7815a06660d82a5bfe14fbe48ed1a3c1e63f" exitCode=0
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.556832 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" event={"ID":"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d","Type":"ContainerDied","Data":"13b4a9d8a5721dd2f56c8c39df0f7815a06660d82a5bfe14fbe48ed1a3c1e63f"}
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.558494 5000 generic.go:334] "Generic (PLEG): container finished" podID="f818c898-5db5-41e5-9614-8f58fcaca803" containerID="789ca483b0f24c9b7e0dd09976e8ce02fc6458136a89032111b48a6606ce2e10" exitCode=0
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.558521 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" event={"ID":"f818c898-5db5-41e5-9614-8f58fcaca803","Type":"ContainerDied","Data":"789ca483b0f24c9b7e0dd09976e8ce02fc6458136a89032111b48a6606ce2e10"}
Jan 05 21:49:31 crc kubenswrapper[5000]: I0105 21:49:31.563963 5000 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 05 21:49:31 crc kubenswrapper[5000]: W0105 21:49:31.580170 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f01d9e3_692b_4648_b57f_3fb13e84379a.slice/crio-f2bb926735d17b0de82f632d5c073ad6ca6f5f5bbd02156b1ec119411d6ff886 WatchSource:0}: Error finding container f2bb926735d17b0de82f632d5c073ad6ca6f5f5bbd02156b1ec119411d6ff886: Status 404 returned error can't find the container with id f2bb926735d17b0de82f632d5c073ad6ca6f5f5bbd02156b1ec119411d6ff886 Jan 05 21:49:32 crc kubenswrapper[5000]: I0105 21:49:32.266650 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 05 21:49:32 crc kubenswrapper[5000]: I0105 21:49:32.569015 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-48f9l" event={"ID":"2f01d9e3-692b-4648-b57f-3fb13e84379a","Type":"ContainerStarted","Data":"f2bb926735d17b0de82f632d5c073ad6ca6f5f5bbd02156b1ec119411d6ff886"} Jan 05 21:49:32 crc kubenswrapper[5000]: I0105 21:49:32.620958 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.084053 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x7dkd"] Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.091634 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.094087 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7dkd"] Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.128542 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df532b2d-cd12-4402-97a1-57fbe103805b-catalog-content\") pod \"redhat-operators-x7dkd\" (UID: \"df532b2d-cd12-4402-97a1-57fbe103805b\") " pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.128593 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4rjq\" (UniqueName: \"kubernetes.io/projected/df532b2d-cd12-4402-97a1-57fbe103805b-kube-api-access-s4rjq\") pod \"redhat-operators-x7dkd\" (UID: \"df532b2d-cd12-4402-97a1-57fbe103805b\") " pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.128628 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df532b2d-cd12-4402-97a1-57fbe103805b-utilities\") pod \"redhat-operators-x7dkd\" (UID: \"df532b2d-cd12-4402-97a1-57fbe103805b\") " pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:33 crc kubenswrapper[5000]: W0105 21:49:33.143108 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3628fb9_23a7_47e6_853a_e8f31311916f.slice/crio-3d135fa81fd89af178d5da827ac54c78fc3b965591731c1920378a573d24b767 WatchSource:0}: Error finding container 3d135fa81fd89af178d5da827ac54c78fc3b965591731c1920378a573d24b767: Status 404 returned error can't find the container with id 
3d135fa81fd89af178d5da827ac54c78fc3b965591731c1920378a573d24b767 Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.230570 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df532b2d-cd12-4402-97a1-57fbe103805b-utilities\") pod \"redhat-operators-x7dkd\" (UID: \"df532b2d-cd12-4402-97a1-57fbe103805b\") " pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.231113 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df532b2d-cd12-4402-97a1-57fbe103805b-catalog-content\") pod \"redhat-operators-x7dkd\" (UID: \"df532b2d-cd12-4402-97a1-57fbe103805b\") " pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.231140 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4rjq\" (UniqueName: \"kubernetes.io/projected/df532b2d-cd12-4402-97a1-57fbe103805b-kube-api-access-s4rjq\") pod \"redhat-operators-x7dkd\" (UID: \"df532b2d-cd12-4402-97a1-57fbe103805b\") " pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.231498 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df532b2d-cd12-4402-97a1-57fbe103805b-utilities\") pod \"redhat-operators-x7dkd\" (UID: \"df532b2d-cd12-4402-97a1-57fbe103805b\") " pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.231721 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df532b2d-cd12-4402-97a1-57fbe103805b-catalog-content\") pod \"redhat-operators-x7dkd\" (UID: \"df532b2d-cd12-4402-97a1-57fbe103805b\") " pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 
21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.250012 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4rjq\" (UniqueName: \"kubernetes.io/projected/df532b2d-cd12-4402-97a1-57fbe103805b-kube-api-access-s4rjq\") pod \"redhat-operators-x7dkd\" (UID: \"df532b2d-cd12-4402-97a1-57fbe103805b\") " pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.414975 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.577511 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f3628fb9-23a7-47e6-853a-e8f31311916f","Type":"ContainerStarted","Data":"3d135fa81fd89af178d5da827ac54c78fc3b965591731c1920378a573d24b767"} Jan 05 21:49:33 crc kubenswrapper[5000]: I0105 21:49:33.578823 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3e42459b-9f2f-45c6-8a77-6909cc2689a2","Type":"ContainerStarted","Data":"9db9c647a03727a35527058535c4bd6cd4cb1a9f0b48b362e1d8eab6dad2261b"} Jan 05 21:49:38 crc kubenswrapper[5000]: I0105 21:49:38.085639 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7dkd"] Jan 05 21:49:38 crc kubenswrapper[5000]: I0105 21:49:38.613122 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7dkd" event={"ID":"df532b2d-cd12-4402-97a1-57fbe103805b","Type":"ContainerStarted","Data":"56cd64cf09af9dbf48fedd3481d3199619d0d533a74ffa5699bdc64499e41564"} Jan 05 21:49:40 crc kubenswrapper[5000]: I0105 21:49:40.631429 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" event={"ID":"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d","Type":"ContainerStarted","Data":"2fb7433046e6943f3494470cbb5e05c39751ca24fa3a1d678742f885aaf3bad7"} Jan 05 
21:49:40 crc kubenswrapper[5000]: I0105 21:49:40.632221 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:40 crc kubenswrapper[5000]: I0105 21:49:40.633418 5000 generic.go:334] "Generic (PLEG): container finished" podID="df532b2d-cd12-4402-97a1-57fbe103805b" containerID="29b3459dd7c26e90b6f708672ccab083e8dbfd99c2005ce05871a6064541a952" exitCode=0 Jan 05 21:49:40 crc kubenswrapper[5000]: I0105 21:49:40.633538 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7dkd" event={"ID":"df532b2d-cd12-4402-97a1-57fbe103805b","Type":"ContainerDied","Data":"29b3459dd7c26e90b6f708672ccab083e8dbfd99c2005ce05871a6064541a952"} Jan 05 21:49:40 crc kubenswrapper[5000]: I0105 21:49:40.639677 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" event={"ID":"f818c898-5db5-41e5-9614-8f58fcaca803","Type":"ContainerStarted","Data":"2f675d5fc0494ee67906c4337232e102fa097af6880e2028fb911ef9f7013b97"} Jan 05 21:49:40 crc kubenswrapper[5000]: I0105 21:49:40.640176 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" Jan 05 21:49:40 crc kubenswrapper[5000]: I0105 21:49:40.671025 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" podStartSLOduration=23.217165477 podStartE2EDuration="23.67098079s" podCreationTimestamp="2026-01-05 21:49:17 +0000 UTC" firstStartedPulling="2026-01-05 21:49:30.0326971 +0000 UTC m=+924.988899569" lastFinishedPulling="2026-01-05 21:49:30.486512413 +0000 UTC m=+925.442714882" observedRunningTime="2026-01-05 21:49:40.664849355 +0000 UTC m=+935.621051854" watchObservedRunningTime="2026-01-05 21:49:40.67098079 +0000 UTC m=+935.627183269" Jan 05 21:49:40 crc kubenswrapper[5000]: I0105 21:49:40.709267 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" podStartSLOduration=23.252512114 podStartE2EDuration="23.70923108s" podCreationTimestamp="2026-01-05 21:49:17 +0000 UTC" firstStartedPulling="2026-01-05 21:49:30.057477216 +0000 UTC m=+925.013679685" lastFinishedPulling="2026-01-05 21:49:30.514196182 +0000 UTC m=+925.470398651" observedRunningTime="2026-01-05 21:49:40.703881107 +0000 UTC m=+935.660083596" watchObservedRunningTime="2026-01-05 21:49:40.70923108 +0000 UTC m=+935.665433549" Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.648009 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb","Type":"ContainerStarted","Data":"af231ca02683df2a57ad6222bd1109d5d3b597c0c7de112a1efd70dd203cc63f"} Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.649498 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"43e574d5-969c-40aa-abd6-69f81feef2c5","Type":"ContainerStarted","Data":"d9e09b5e3cd21a8dbcd2df03dc207ade506d71dda5496ccc3462a3a79c16d43b"} Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.651497 5000 generic.go:334] "Generic (PLEG): container finished" podID="4e574607-e42c-4140-b43a-379ba76f4e73" containerID="4b1f494838b9866ab5ff2907ef10d01210b62ad59363adcf72cfbe10b0df2bc5" exitCode=0 Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.651528 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cgdx9" event={"ID":"4e574607-e42c-4140-b43a-379ba76f4e73","Type":"ContainerDied","Data":"4b1f494838b9866ab5ff2907ef10d01210b62ad59363adcf72cfbe10b0df2bc5"} Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.653268 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b7b36978-e904-42dc-b2e9-cfd481f5b6f0","Type":"ContainerStarted","Data":"3a25d948fa0b739ce158bf1a0e9e4cf6e91f804ce028d251167b42fff10ecc74"} Jan 05 21:49:41 crc 
kubenswrapper[5000]: I0105 21:49:41.653390 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.656677 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb55e4be-34e2-4649-aa6a-24b2019cc9cf","Type":"ContainerStarted","Data":"3f752c2eff94cb9a446f91497783bbb92f7c89190c49db55c6780d7befd57f4d"} Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.658070 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e","Type":"ContainerStarted","Data":"e176a95266bbce415d6b9a50c016e5284a45a76f8998709371f840490feb885a"} Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.660183 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3e42459b-9f2f-45c6-8a77-6909cc2689a2","Type":"ContainerStarted","Data":"af4f728027c92f5098569b93b35e5b8151bcd7f95731faa4f85dbe88a8323ca8"} Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.660216 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3e42459b-9f2f-45c6-8a77-6909cc2689a2","Type":"ContainerStarted","Data":"6cadde96ed54929615bcb58f1aa39a7cadea931f0c7a0711960029f9062b2eb6"} Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.661309 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-48f9l" event={"ID":"2f01d9e3-692b-4648-b57f-3fb13e84379a","Type":"ContainerStarted","Data":"f191be58d46eb30ac4979899951a5a4c44669db6176160d5dbc0934932e4c290"} Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.662785 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f3628fb9-23a7-47e6-853a-e8f31311916f","Type":"ContainerStarted","Data":"46e7a4130b1205cb025be101171433793be5c4100ce386ef85432fa525fb0e74"} Jan 05 21:49:41 crc 
kubenswrapper[5000]: I0105 21:49:41.662813 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f3628fb9-23a7-47e6-853a-e8f31311916f","Type":"ContainerStarted","Data":"739e9cd0701d3dadd6f5157caa8fcd963879c4455049ac6a98504959e4d8ecdd"} Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.664960 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0be433e4-7178-4637-922d-9d1d455b7f76","Type":"ContainerStarted","Data":"c251c03e8bdfa68bb70a2274c85103f634b64278f95d16df6e0672ffb6a217e5"} Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.665114 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.666195 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qtwd6" event={"ID":"30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1","Type":"ContainerStarted","Data":"960663a6ac060e2eb7e3dcfb876c42bb7ac119d73f76f82e89eb76dfbf28d2b5"} Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.715840 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-48f9l" podStartSLOduration=3.265174467 podStartE2EDuration="11.715818665s" podCreationTimestamp="2026-01-05 21:49:30 +0000 UTC" firstStartedPulling="2026-01-05 21:49:31.590432921 +0000 UTC m=+926.546635390" lastFinishedPulling="2026-01-05 21:49:40.041077119 +0000 UTC m=+934.997279588" observedRunningTime="2026-01-05 21:49:41.709644049 +0000 UTC m=+936.665846518" watchObservedRunningTime="2026-01-05 21:49:41.715818665 +0000 UTC m=+936.672021134" Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.823350 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.039408696 podStartE2EDuration="11.823329368s" podCreationTimestamp="2026-01-05 21:49:30 +0000 UTC" 
firstStartedPulling="2026-01-05 21:49:33.1477649 +0000 UTC m=+928.103967359" lastFinishedPulling="2026-01-05 21:49:39.931685562 +0000 UTC m=+934.887888031" observedRunningTime="2026-01-05 21:49:41.814856377 +0000 UTC m=+936.771058856" watchObservedRunningTime="2026-01-05 21:49:41.823329368 +0000 UTC m=+936.779531837" Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.839811 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.801419097 podStartE2EDuration="20.839788547s" podCreationTimestamp="2026-01-05 21:49:21 +0000 UTC" firstStartedPulling="2026-01-05 21:49:30.06392588 +0000 UTC m=+925.020128349" lastFinishedPulling="2026-01-05 21:49:38.10229532 +0000 UTC m=+933.058497799" observedRunningTime="2026-01-05 21:49:41.831738588 +0000 UTC m=+936.787941047" watchObservedRunningTime="2026-01-05 21:49:41.839788547 +0000 UTC m=+936.795991016" Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.899913 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=7.995085263 podStartE2EDuration="17.89988177s" podCreationTimestamp="2026-01-05 21:49:24 +0000 UTC" firstStartedPulling="2026-01-05 21:49:30.095432878 +0000 UTC m=+925.051635337" lastFinishedPulling="2026-01-05 21:49:40.000229365 +0000 UTC m=+934.956431844" observedRunningTime="2026-01-05 21:49:41.885373586 +0000 UTC m=+936.841576065" watchObservedRunningTime="2026-01-05 21:49:41.89988177 +0000 UTC m=+936.856084239" Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.941546 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.02510707 podStartE2EDuration="12.941527097s" podCreationTimestamp="2026-01-05 21:49:29 +0000 UTC" firstStartedPulling="2026-01-05 21:49:33.029042637 +0000 UTC m=+927.985245106" lastFinishedPulling="2026-01-05 21:49:39.945462664 +0000 UTC m=+934.901665133" 
observedRunningTime="2026-01-05 21:49:41.924742148 +0000 UTC m=+936.880944617" watchObservedRunningTime="2026-01-05 21:49:41.941527097 +0000 UTC m=+936.897729566" Jan 05 21:49:41 crc kubenswrapper[5000]: I0105 21:49:41.961012 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qtwd6" podStartSLOduration=5.450096735 podStartE2EDuration="13.960987441s" podCreationTimestamp="2026-01-05 21:49:28 +0000 UTC" firstStartedPulling="2026-01-05 21:49:30.057120956 +0000 UTC m=+925.013323425" lastFinishedPulling="2026-01-05 21:49:38.568011662 +0000 UTC m=+933.524214131" observedRunningTime="2026-01-05 21:49:41.958172841 +0000 UTC m=+936.914375310" watchObservedRunningTime="2026-01-05 21:49:41.960987441 +0000 UTC m=+936.917189910" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.076572 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2fxmv"] Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.123247 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vzrfk"] Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.124650 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.126419 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.147323 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vzrfk"] Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.285907 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-config\") pod \"dnsmasq-dns-7fd796d7df-vzrfk\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") " pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.286131 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvlrj\" (UniqueName: \"kubernetes.io/projected/64ead72c-36ef-416a-b028-2f4344d62508-kube-api-access-xvlrj\") pod \"dnsmasq-dns-7fd796d7df-vzrfk\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") " pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.286265 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vzrfk\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") " pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.286306 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vzrfk\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.290783 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zn9f4"] Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.328080 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ht7kt"] Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.329812 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.332884 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.353410 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ht7kt"] Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.388031 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-config\") pod \"dnsmasq-dns-7fd796d7df-vzrfk\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") " pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.388112 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvlrj\" (UniqueName: \"kubernetes.io/projected/64ead72c-36ef-416a-b028-2f4344d62508-kube-api-access-xvlrj\") pod \"dnsmasq-dns-7fd796d7df-vzrfk\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") " pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.388162 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vzrfk\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.388184 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vzrfk\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") " pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.388986 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vzrfk\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") " pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.389507 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-config\") pod \"dnsmasq-dns-7fd796d7df-vzrfk\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") " pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.390282 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vzrfk\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") " pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.409596 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvlrj\" (UniqueName: \"kubernetes.io/projected/64ead72c-36ef-416a-b028-2f4344d62508-kube-api-access-xvlrj\") pod \"dnsmasq-dns-7fd796d7df-vzrfk\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") " pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.439661 5000 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.489584 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.489697 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-config\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.489749 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.489809 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfssd\" (UniqueName: \"kubernetes.io/projected/d7e0cb5f-226c-4617-a92f-f87b8e595498-kube-api-access-dfssd\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.489837 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.591415 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-config\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.591804 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.591855 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfssd\" (UniqueName: \"kubernetes.io/projected/d7e0cb5f-226c-4617-a92f-f87b8e595498-kube-api-access-dfssd\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.591881 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.591970 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.593067 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.593397 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-config\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.593720 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.594802 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.616479 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfssd\" (UniqueName: \"kubernetes.io/projected/d7e0cb5f-226c-4617-a92f-f87b8e595498-kube-api-access-dfssd\") pod \"dnsmasq-dns-86db49b7ff-ht7kt\" 
(UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.649782 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.679524 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7dkd" event={"ID":"df532b2d-cd12-4402-97a1-57fbe103805b","Type":"ContainerStarted","Data":"062875dad6d5113d1f5bd6b21328dc34a4b5e7d53611e86f0fbaa3ad95c3fc31"} Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.688475 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cgdx9" event={"ID":"4e574607-e42c-4140-b43a-379ba76f4e73","Type":"ContainerStarted","Data":"1eac2dcbd1d31251b7bb193f0a24c8a0232c0f79dd73c711fea9e0372fc78eac"} Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.688865 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cgdx9" event={"ID":"4e574607-e42c-4140-b43a-379ba76f4e73","Type":"ContainerStarted","Data":"e4aed74c071f3adae4aa91469bc9f5f97c5bc24a7d4b490b71e1d25ab2b995cf"} Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.689204 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" podUID="9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d" containerName="dnsmasq-dns" containerID="cri-o://2fb7433046e6943f3494470cbb5e05c39751ca24fa3a1d678742f885aaf3bad7" gracePeriod=10 Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.689479 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-qtwd6" Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.690781 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" podUID="f818c898-5db5-41e5-9614-8f58fcaca803" 
containerName="dnsmasq-dns" containerID="cri-o://2f675d5fc0494ee67906c4337232e102fa097af6880e2028fb911ef9f7013b97" gracePeriod=10
Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.734387 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-cgdx9" podStartSLOduration=6.939419436 podStartE2EDuration="14.734364719s" podCreationTimestamp="2026-01-05 21:49:28 +0000 UTC" firstStartedPulling="2026-01-05 21:49:30.198881716 +0000 UTC m=+925.155084185" lastFinishedPulling="2026-01-05 21:49:37.993826999 +0000 UTC m=+932.950029468" observedRunningTime="2026-01-05 21:49:42.728798771 +0000 UTC m=+937.685001260" watchObservedRunningTime="2026-01-05 21:49:42.734364719 +0000 UTC m=+937.690567198"
Jan 05 21:49:42 crc kubenswrapper[5000]: I0105 21:49:42.919232 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vzrfk"]
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.142980 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ht7kt"]
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.187043 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zn9f4"
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.305158 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f818c898-5db5-41e5-9614-8f58fcaca803-config\") pod \"f818c898-5db5-41e5-9614-8f58fcaca803\" (UID: \"f818c898-5db5-41e5-9614-8f58fcaca803\") "
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.305775 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f818c898-5db5-41e5-9614-8f58fcaca803-dns-svc\") pod \"f818c898-5db5-41e5-9614-8f58fcaca803\" (UID: \"f818c898-5db5-41e5-9614-8f58fcaca803\") "
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.305830 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kntzs\" (UniqueName: \"kubernetes.io/projected/f818c898-5db5-41e5-9614-8f58fcaca803-kube-api-access-kntzs\") pod \"f818c898-5db5-41e5-9614-8f58fcaca803\" (UID: \"f818c898-5db5-41e5-9614-8f58fcaca803\") "
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.313711 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f818c898-5db5-41e5-9614-8f58fcaca803-kube-api-access-kntzs" (OuterVolumeSpecName: "kube-api-access-kntzs") pod "f818c898-5db5-41e5-9614-8f58fcaca803" (UID: "f818c898-5db5-41e5-9614-8f58fcaca803"). InnerVolumeSpecName "kube-api-access-kntzs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.341489 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f818c898-5db5-41e5-9614-8f58fcaca803-config" (OuterVolumeSpecName: "config") pod "f818c898-5db5-41e5-9614-8f58fcaca803" (UID: "f818c898-5db5-41e5-9614-8f58fcaca803"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.342533 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f818c898-5db5-41e5-9614-8f58fcaca803-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f818c898-5db5-41e5-9614-8f58fcaca803" (UID: "f818c898-5db5-41e5-9614-8f58fcaca803"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.350468 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.353642 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.388329 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.407226 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-config\") pod \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\" (UID: \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\") " Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.407491 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvv6x\" (UniqueName: \"kubernetes.io/projected/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-kube-api-access-pvv6x\") pod \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\" (UID: \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\") " Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.407559 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-dns-svc\") pod 
\"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\" (UID: \"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d\") " Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.408158 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f818c898-5db5-41e5-9614-8f58fcaca803-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.408176 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f818c898-5db5-41e5-9614-8f58fcaca803-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.408190 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kntzs\" (UniqueName: \"kubernetes.io/projected/f818c898-5db5-41e5-9614-8f58fcaca803-kube-api-access-kntzs\") on node \"crc\" DevicePath \"\"" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.426820 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-kube-api-access-pvv6x" (OuterVolumeSpecName: "kube-api-access-pvv6x") pod "9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d" (UID: "9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d"). InnerVolumeSpecName "kube-api-access-pvv6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.454357 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-config" (OuterVolumeSpecName: "config") pod "9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d" (UID: "9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.458629 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d" (UID: "9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.510967 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.511019 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvv6x\" (UniqueName: \"kubernetes.io/projected/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-kube-api-access-pvv6x\") on node \"crc\" DevicePath \"\"" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.511036 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.564686 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.696663 5000 generic.go:334] "Generic (PLEG): container finished" podID="9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d" containerID="2fb7433046e6943f3494470cbb5e05c39751ca24fa3a1d678742f885aaf3bad7" exitCode=0 Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.696735 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" 
event={"ID":"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d","Type":"ContainerDied","Data":"2fb7433046e6943f3494470cbb5e05c39751ca24fa3a1d678742f885aaf3bad7"} Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.696759 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.696780 5000 scope.go:117] "RemoveContainer" containerID="2fb7433046e6943f3494470cbb5e05c39751ca24fa3a1d678742f885aaf3bad7" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.696766 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2fxmv" event={"ID":"9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d","Type":"ContainerDied","Data":"1416de5f249982a6aa08c27f9a637777fb93c7985b1d0310742d79b355216d15"} Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.699942 5000 generic.go:334] "Generic (PLEG): container finished" podID="df532b2d-cd12-4402-97a1-57fbe103805b" containerID="062875dad6d5113d1f5bd6b21328dc34a4b5e7d53611e86f0fbaa3ad95c3fc31" exitCode=0 Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.700038 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7dkd" event={"ID":"df532b2d-cd12-4402-97a1-57fbe103805b","Type":"ContainerDied","Data":"062875dad6d5113d1f5bd6b21328dc34a4b5e7d53611e86f0fbaa3ad95c3fc31"} Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.705190 5000 generic.go:334] "Generic (PLEG): container finished" podID="d7e0cb5f-226c-4617-a92f-f87b8e595498" containerID="5dd16decbbbf902190435999c438cad700646a97a983f0b3353041901f3506d2" exitCode=0 Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.705249 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" event={"ID":"d7e0cb5f-226c-4617-a92f-f87b8e595498","Type":"ContainerDied","Data":"5dd16decbbbf902190435999c438cad700646a97a983f0b3353041901f3506d2"} Jan 05 21:49:43 crc 
kubenswrapper[5000]: I0105 21:49:43.705268 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" event={"ID":"d7e0cb5f-226c-4617-a92f-f87b8e595498","Type":"ContainerStarted","Data":"fcc08e2a75ab464583492ffe79c329149d1b7c7f78918e95a4888ad81386f20b"}
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.713082 5000 generic.go:334] "Generic (PLEG): container finished" podID="f818c898-5db5-41e5-9614-8f58fcaca803" containerID="2f675d5fc0494ee67906c4337232e102fa097af6880e2028fb911ef9f7013b97" exitCode=0
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.713193 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" event={"ID":"f818c898-5db5-41e5-9614-8f58fcaca803","Type":"ContainerDied","Data":"2f675d5fc0494ee67906c4337232e102fa097af6880e2028fb911ef9f7013b97"}
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.713263 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zn9f4" event={"ID":"f818c898-5db5-41e5-9614-8f58fcaca803","Type":"ContainerDied","Data":"357a7f87815c922b0e9b4ca410d4e3f5e8da941c7e4814b119b795bb6177dc81"}
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.715061 5000 generic.go:334] "Generic (PLEG): container finished" podID="64ead72c-36ef-416a-b028-2f4344d62508" containerID="b0c1926b446e00a4fc12ce12dedef410d4a9183b73866b5442d6affe3f109cda" exitCode=0
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.715983 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zn9f4"
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.716069 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" event={"ID":"64ead72c-36ef-416a-b028-2f4344d62508","Type":"ContainerDied","Data":"b0c1926b446e00a4fc12ce12dedef410d4a9183b73866b5442d6affe3f109cda"}
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.716098 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" event={"ID":"64ead72c-36ef-416a-b028-2f4344d62508","Type":"ContainerStarted","Data":"56aacc02292e3595215dcb75fde798e0eccee24f560f41a106953e0a728282bc"}
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.716793 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cgdx9"
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.716953 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cgdx9"
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.716971 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.723291 5000 scope.go:117] "RemoveContainer" containerID="13b4a9d8a5721dd2f56c8c39df0f7815a06660d82a5bfe14fbe48ed1a3c1e63f"
Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.816003 5000 scope.go:117] "RemoveContainer" containerID="2fb7433046e6943f3494470cbb5e05c39751ca24fa3a1d678742f885aaf3bad7"
Jan 05 21:49:43 crc kubenswrapper[5000]: E0105 21:49:43.819518 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fb7433046e6943f3494470cbb5e05c39751ca24fa3a1d678742f885aaf3bad7\": container with ID starting with 2fb7433046e6943f3494470cbb5e05c39751ca24fa3a1d678742f885aaf3bad7 not found: ID does not exist" 
containerID="2fb7433046e6943f3494470cbb5e05c39751ca24fa3a1d678742f885aaf3bad7" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.819599 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fb7433046e6943f3494470cbb5e05c39751ca24fa3a1d678742f885aaf3bad7"} err="failed to get container status \"2fb7433046e6943f3494470cbb5e05c39751ca24fa3a1d678742f885aaf3bad7\": rpc error: code = NotFound desc = could not find container \"2fb7433046e6943f3494470cbb5e05c39751ca24fa3a1d678742f885aaf3bad7\": container with ID starting with 2fb7433046e6943f3494470cbb5e05c39751ca24fa3a1d678742f885aaf3bad7 not found: ID does not exist" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.819633 5000 scope.go:117] "RemoveContainer" containerID="13b4a9d8a5721dd2f56c8c39df0f7815a06660d82a5bfe14fbe48ed1a3c1e63f" Jan 05 21:49:43 crc kubenswrapper[5000]: E0105 21:49:43.840097 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13b4a9d8a5721dd2f56c8c39df0f7815a06660d82a5bfe14fbe48ed1a3c1e63f\": container with ID starting with 13b4a9d8a5721dd2f56c8c39df0f7815a06660d82a5bfe14fbe48ed1a3c1e63f not found: ID does not exist" containerID="13b4a9d8a5721dd2f56c8c39df0f7815a06660d82a5bfe14fbe48ed1a3c1e63f" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.840177 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b4a9d8a5721dd2f56c8c39df0f7815a06660d82a5bfe14fbe48ed1a3c1e63f"} err="failed to get container status \"13b4a9d8a5721dd2f56c8c39df0f7815a06660d82a5bfe14fbe48ed1a3c1e63f\": rpc error: code = NotFound desc = could not find container \"13b4a9d8a5721dd2f56c8c39df0f7815a06660d82a5bfe14fbe48ed1a3c1e63f\": container with ID starting with 13b4a9d8a5721dd2f56c8c39df0f7815a06660d82a5bfe14fbe48ed1a3c1e63f not found: ID does not exist" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.840215 5000 scope.go:117] 
"RemoveContainer" containerID="2f675d5fc0494ee67906c4337232e102fa097af6880e2028fb911ef9f7013b97" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.890570 5000 scope.go:117] "RemoveContainer" containerID="789ca483b0f24c9b7e0dd09976e8ce02fc6458136a89032111b48a6606ce2e10" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.895197 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zn9f4"] Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.903161 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zn9f4"] Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.909862 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2fxmv"] Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.916087 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2fxmv"] Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.919959 5000 scope.go:117] "RemoveContainer" containerID="2f675d5fc0494ee67906c4337232e102fa097af6880e2028fb911ef9f7013b97" Jan 05 21:49:43 crc kubenswrapper[5000]: E0105 21:49:43.920531 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f675d5fc0494ee67906c4337232e102fa097af6880e2028fb911ef9f7013b97\": container with ID starting with 2f675d5fc0494ee67906c4337232e102fa097af6880e2028fb911ef9f7013b97 not found: ID does not exist" containerID="2f675d5fc0494ee67906c4337232e102fa097af6880e2028fb911ef9f7013b97" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.920601 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f675d5fc0494ee67906c4337232e102fa097af6880e2028fb911ef9f7013b97"} err="failed to get container status \"2f675d5fc0494ee67906c4337232e102fa097af6880e2028fb911ef9f7013b97\": rpc error: code = NotFound desc = could not find container 
\"2f675d5fc0494ee67906c4337232e102fa097af6880e2028fb911ef9f7013b97\": container with ID starting with 2f675d5fc0494ee67906c4337232e102fa097af6880e2028fb911ef9f7013b97 not found: ID does not exist" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.920629 5000 scope.go:117] "RemoveContainer" containerID="789ca483b0f24c9b7e0dd09976e8ce02fc6458136a89032111b48a6606ce2e10" Jan 05 21:49:43 crc kubenswrapper[5000]: E0105 21:49:43.921133 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"789ca483b0f24c9b7e0dd09976e8ce02fc6458136a89032111b48a6606ce2e10\": container with ID starting with 789ca483b0f24c9b7e0dd09976e8ce02fc6458136a89032111b48a6606ce2e10 not found: ID does not exist" containerID="789ca483b0f24c9b7e0dd09976e8ce02fc6458136a89032111b48a6606ce2e10" Jan 05 21:49:43 crc kubenswrapper[5000]: I0105 21:49:43.921175 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789ca483b0f24c9b7e0dd09976e8ce02fc6458136a89032111b48a6606ce2e10"} err="failed to get container status \"789ca483b0f24c9b7e0dd09976e8ce02fc6458136a89032111b48a6606ce2e10\": rpc error: code = NotFound desc = could not find container \"789ca483b0f24c9b7e0dd09976e8ce02fc6458136a89032111b48a6606ce2e10\": container with ID starting with 789ca483b0f24c9b7e0dd09976e8ce02fc6458136a89032111b48a6606ce2e10 not found: ID does not exist" Jan 05 21:49:44 crc kubenswrapper[5000]: I0105 21:49:44.728052 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7dkd" event={"ID":"df532b2d-cd12-4402-97a1-57fbe103805b","Type":"ContainerStarted","Data":"ccd6866d49c37c86e1d35bd577ffe79efeefb3de97bc7ce9682208eacb87eba0"} Jan 05 21:49:44 crc kubenswrapper[5000]: I0105 21:49:44.729638 5000 generic.go:334] "Generic (PLEG): container finished" podID="43e574d5-969c-40aa-abd6-69f81feef2c5" containerID="d9e09b5e3cd21a8dbcd2df03dc207ade506d71dda5496ccc3462a3a79c16d43b" 
exitCode=0 Jan 05 21:49:44 crc kubenswrapper[5000]: I0105 21:49:44.729913 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"43e574d5-969c-40aa-abd6-69f81feef2c5","Type":"ContainerDied","Data":"d9e09b5e3cd21a8dbcd2df03dc207ade506d71dda5496ccc3462a3a79c16d43b"} Jan 05 21:49:44 crc kubenswrapper[5000]: I0105 21:49:44.733370 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" event={"ID":"d7e0cb5f-226c-4617-a92f-f87b8e595498","Type":"ContainerStarted","Data":"205c606aa43c916b17cd0325f4355cde0a1c42efb6079dab8582d4ee04edfb90"} Jan 05 21:49:44 crc kubenswrapper[5000]: I0105 21:49:44.733554 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:44 crc kubenswrapper[5000]: I0105 21:49:44.739702 5000 generic.go:334] "Generic (PLEG): container finished" podID="eb55e4be-34e2-4649-aa6a-24b2019cc9cf" containerID="3f752c2eff94cb9a446f91497783bbb92f7c89190c49db55c6780d7befd57f4d" exitCode=0 Jan 05 21:49:44 crc kubenswrapper[5000]: I0105 21:49:44.739800 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb55e4be-34e2-4649-aa6a-24b2019cc9cf","Type":"ContainerDied","Data":"3f752c2eff94cb9a446f91497783bbb92f7c89190c49db55c6780d7befd57f4d"} Jan 05 21:49:44 crc kubenswrapper[5000]: I0105 21:49:44.750641 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" event={"ID":"64ead72c-36ef-416a-b028-2f4344d62508","Type":"ContainerStarted","Data":"4a8bd664abefe4b1a459c78707ec3184f0dbaedfd8c5a00b908bf482ae319e0e"} Jan 05 21:49:44 crc kubenswrapper[5000]: I0105 21:49:44.751632 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:44 crc kubenswrapper[5000]: I0105 21:49:44.818647 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-x7dkd" podStartSLOduration=8.246413398 podStartE2EDuration="11.818620485s" podCreationTimestamp="2026-01-05 21:49:33 +0000 UTC" firstStartedPulling="2026-01-05 21:49:40.638999928 +0000 UTC m=+935.595202417" lastFinishedPulling="2026-01-05 21:49:44.211207045 +0000 UTC m=+939.167409504" observedRunningTime="2026-01-05 21:49:44.788487056 +0000 UTC m=+939.744689535" watchObservedRunningTime="2026-01-05 21:49:44.818620485 +0000 UTC m=+939.774822954" Jan 05 21:49:44 crc kubenswrapper[5000]: I0105 21:49:44.821634 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" podStartSLOduration=2.821623881 podStartE2EDuration="2.821623881s" podCreationTimestamp="2026-01-05 21:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:49:44.812352346 +0000 UTC m=+939.768554825" watchObservedRunningTime="2026-01-05 21:49:44.821623881 +0000 UTC m=+939.777826350" Jan 05 21:49:44 crc kubenswrapper[5000]: I0105 21:49:44.841277 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" podStartSLOduration=2.84125411 podStartE2EDuration="2.84125411s" podCreationTimestamp="2026-01-05 21:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:49:44.836473734 +0000 UTC m=+939.792676213" watchObservedRunningTime="2026-01-05 21:49:44.84125411 +0000 UTC m=+939.797456579" Jan 05 21:49:45 crc kubenswrapper[5000]: I0105 21:49:45.333408 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d" path="/var/lib/kubelet/pods/9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d/volumes" Jan 05 21:49:45 crc kubenswrapper[5000]: I0105 21:49:45.333971 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="f818c898-5db5-41e5-9614-8f58fcaca803" path="/var/lib/kubelet/pods/f818c898-5db5-41e5-9614-8f58fcaca803/volumes" Jan 05 21:49:45 crc kubenswrapper[5000]: I0105 21:49:45.767714 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"43e574d5-969c-40aa-abd6-69f81feef2c5","Type":"ContainerStarted","Data":"8dffce023cc2d66e083f74b74196b9846823724f4c2a2a5c425e631cb95bc353"} Jan 05 21:49:45 crc kubenswrapper[5000]: I0105 21:49:45.770081 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb55e4be-34e2-4649-aa6a-24b2019cc9cf","Type":"ContainerStarted","Data":"2d4a7da0c53ffa116964cb0377ce895b15a8212f9acaf9761a0b4d8949219624"} Jan 05 21:49:45 crc kubenswrapper[5000]: I0105 21:49:45.789995 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=17.859134191 podStartE2EDuration="25.789962406s" podCreationTimestamp="2026-01-05 21:49:20 +0000 UTC" firstStartedPulling="2026-01-05 21:49:30.060478942 +0000 UTC m=+925.016681421" lastFinishedPulling="2026-01-05 21:49:37.991307167 +0000 UTC m=+932.947509636" observedRunningTime="2026-01-05 21:49:45.787842295 +0000 UTC m=+940.744044804" watchObservedRunningTime="2026-01-05 21:49:45.789962406 +0000 UTC m=+940.746164885" Jan 05 21:49:45 crc kubenswrapper[5000]: I0105 21:49:45.815558 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=17.912385174 podStartE2EDuration="27.815537104s" podCreationTimestamp="2026-01-05 21:49:18 +0000 UTC" firstStartedPulling="2026-01-05 21:49:30.028525321 +0000 UTC m=+924.984727790" lastFinishedPulling="2026-01-05 21:49:39.931677251 +0000 UTC m=+934.887879720" observedRunningTime="2026-01-05 21:49:45.812299362 +0000 UTC m=+940.768501841" watchObservedRunningTime="2026-01-05 21:49:45.815537104 +0000 UTC m=+940.771739573" Jan 05 21:49:45 
crc kubenswrapper[5000]: I0105 21:49:45.819806 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.174124 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5t566"] Jan 05 21:49:46 crc kubenswrapper[5000]: E0105 21:49:46.174470 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f818c898-5db5-41e5-9614-8f58fcaca803" containerName="dnsmasq-dns" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.174493 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f818c898-5db5-41e5-9614-8f58fcaca803" containerName="dnsmasq-dns" Jan 05 21:49:46 crc kubenswrapper[5000]: E0105 21:49:46.174518 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f818c898-5db5-41e5-9614-8f58fcaca803" containerName="init" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.174527 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f818c898-5db5-41e5-9614-8f58fcaca803" containerName="init" Jan 05 21:49:46 crc kubenswrapper[5000]: E0105 21:49:46.174565 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d" containerName="dnsmasq-dns" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.174573 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d" containerName="dnsmasq-dns" Jan 05 21:49:46 crc kubenswrapper[5000]: E0105 21:49:46.174599 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d" containerName="init" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.174606 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d" containerName="init" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.174767 5000 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9aa7fd7a-5842-4fd1-b85b-ddae5f9cc23d" containerName="dnsmasq-dns" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.174788 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f818c898-5db5-41e5-9614-8f58fcaca803" containerName="dnsmasq-dns" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.176359 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.189519 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5t566"] Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.351970 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc4m8\" (UniqueName: \"kubernetes.io/projected/9e0bfec2-b111-430b-a47b-f8f8a661b594-kube-api-access-pc4m8\") pod \"certified-operators-5t566\" (UID: \"9e0bfec2-b111-430b-a47b-f8f8a661b594\") " pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.352389 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0bfec2-b111-430b-a47b-f8f8a661b594-utilities\") pod \"certified-operators-5t566\" (UID: \"9e0bfec2-b111-430b-a47b-f8f8a661b594\") " pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.352511 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0bfec2-b111-430b-a47b-f8f8a661b594-catalog-content\") pod \"certified-operators-5t566\" (UID: \"9e0bfec2-b111-430b-a47b-f8f8a661b594\") " pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.453664 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0bfec2-b111-430b-a47b-f8f8a661b594-utilities\") pod \"certified-operators-5t566\" (UID: \"9e0bfec2-b111-430b-a47b-f8f8a661b594\") " pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.453722 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0bfec2-b111-430b-a47b-f8f8a661b594-catalog-content\") pod \"certified-operators-5t566\" (UID: \"9e0bfec2-b111-430b-a47b-f8f8a661b594\") " pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.453763 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc4m8\" (UniqueName: \"kubernetes.io/projected/9e0bfec2-b111-430b-a47b-f8f8a661b594-kube-api-access-pc4m8\") pod \"certified-operators-5t566\" (UID: \"9e0bfec2-b111-430b-a47b-f8f8a661b594\") " pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.454273 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0bfec2-b111-430b-a47b-f8f8a661b594-utilities\") pod \"certified-operators-5t566\" (UID: \"9e0bfec2-b111-430b-a47b-f8f8a661b594\") " pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.455070 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0bfec2-b111-430b-a47b-f8f8a661b594-catalog-content\") pod \"certified-operators-5t566\" (UID: \"9e0bfec2-b111-430b-a47b-f8f8a661b594\") " pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.474222 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pc4m8\" (UniqueName: \"kubernetes.io/projected/9e0bfec2-b111-430b-a47b-f8f8a661b594-kube-api-access-pc4m8\") pod \"certified-operators-5t566\" (UID: \"9e0bfec2-b111-430b-a47b-f8f8a661b594\") " pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.495724 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.564448 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.618825 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.843467 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 05 21:49:46 crc kubenswrapper[5000]: I0105 21:49:46.963958 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5t566"] Jan 05 21:49:46 crc kubenswrapper[5000]: W0105 21:49:46.972008 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e0bfec2_b111_430b_a47b_f8f8a661b594.slice/crio-72cd40c508a14fa587d59bf2529f9587aba953693058840728a3dd5af10a17d2 WatchSource:0}: Error finding container 72cd40c508a14fa587d59bf2529f9587aba953693058840728a3dd5af10a17d2: Status 404 returned error can't find the container with id 72cd40c508a14fa587d59bf2529f9587aba953693058840728a3dd5af10a17d2 Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.050701 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.054350 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.057767 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-pg8tm" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.059647 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.059693 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.059647 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.082319 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.170617 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/98ae3293-772a-4a0d-8b5e-245e02531e31-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.170654 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6hlq\" (UniqueName: \"kubernetes.io/projected/98ae3293-772a-4a0d-8b5e-245e02531e31-kube-api-access-z6hlq\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.170673 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/98ae3293-772a-4a0d-8b5e-245e02531e31-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.170709 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/98ae3293-772a-4a0d-8b5e-245e02531e31-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.170762 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98ae3293-772a-4a0d-8b5e-245e02531e31-scripts\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.170802 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ae3293-772a-4a0d-8b5e-245e02531e31-config\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.170830 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ae3293-772a-4a0d-8b5e-245e02531e31-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.215062 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.272029 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ae3293-772a-4a0d-8b5e-245e02531e31-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.272090 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/98ae3293-772a-4a0d-8b5e-245e02531e31-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.272118 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6hlq\" (UniqueName: \"kubernetes.io/projected/98ae3293-772a-4a0d-8b5e-245e02531e31-kube-api-access-z6hlq\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.272136 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/98ae3293-772a-4a0d-8b5e-245e02531e31-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.272181 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/98ae3293-772a-4a0d-8b5e-245e02531e31-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.272244 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98ae3293-772a-4a0d-8b5e-245e02531e31-scripts\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.272282 5000 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ae3293-772a-4a0d-8b5e-245e02531e31-config\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.273268 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98ae3293-772a-4a0d-8b5e-245e02531e31-config\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.274010 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/98ae3293-772a-4a0d-8b5e-245e02531e31-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.274089 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98ae3293-772a-4a0d-8b5e-245e02531e31-scripts\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.277913 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ae3293-772a-4a0d-8b5e-245e02531e31-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.278363 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/98ae3293-772a-4a0d-8b5e-245e02531e31-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.278562 
5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/98ae3293-772a-4a0d-8b5e-245e02531e31-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.297343 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6hlq\" (UniqueName: \"kubernetes.io/projected/98ae3293-772a-4a0d-8b5e-245e02531e31-kube-api-access-z6hlq\") pod \"ovn-northd-0\" (UID: \"98ae3293-772a-4a0d-8b5e-245e02531e31\") " pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.469621 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.750032 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 05 21:49:47 crc kubenswrapper[5000]: W0105 21:49:47.755026 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98ae3293_772a_4a0d_8b5e_245e02531e31.slice/crio-a30fde2602af1df64f1aa108e21db72741cd33e65f3eee11529198d8786fc51b WatchSource:0}: Error finding container a30fde2602af1df64f1aa108e21db72741cd33e65f3eee11529198d8786fc51b: Status 404 returned error can't find the container with id a30fde2602af1df64f1aa108e21db72741cd33e65f3eee11529198d8786fc51b Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.789690 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"98ae3293-772a-4a0d-8b5e-245e02531e31","Type":"ContainerStarted","Data":"a30fde2602af1df64f1aa108e21db72741cd33e65f3eee11529198d8786fc51b"} Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.791955 5000 generic.go:334] "Generic (PLEG): container finished" podID="9e0bfec2-b111-430b-a47b-f8f8a661b594" 
containerID="b25f89013ee9932596047f94579fbc00d9299f19c80263bc59c2c6677d5928db" exitCode=0 Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.792021 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5t566" event={"ID":"9e0bfec2-b111-430b-a47b-f8f8a661b594","Type":"ContainerDied","Data":"b25f89013ee9932596047f94579fbc00d9299f19c80263bc59c2c6677d5928db"} Jan 05 21:49:47 crc kubenswrapper[5000]: I0105 21:49:47.792091 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5t566" event={"ID":"9e0bfec2-b111-430b-a47b-f8f8a661b594","Type":"ContainerStarted","Data":"72cd40c508a14fa587d59bf2529f9587aba953693058840728a3dd5af10a17d2"} Jan 05 21:49:48 crc kubenswrapper[5000]: I0105 21:49:48.802247 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5t566" event={"ID":"9e0bfec2-b111-430b-a47b-f8f8a661b594","Type":"ContainerStarted","Data":"b567b9189c5490e899011f8e1854922d90cf5a7f774fa1f58fdd281f18b0d569"} Jan 05 21:49:49 crc kubenswrapper[5000]: I0105 21:49:49.810183 5000 generic.go:334] "Generic (PLEG): container finished" podID="9e0bfec2-b111-430b-a47b-f8f8a661b594" containerID="b567b9189c5490e899011f8e1854922d90cf5a7f774fa1f58fdd281f18b0d569" exitCode=0 Jan 05 21:49:49 crc kubenswrapper[5000]: I0105 21:49:49.810272 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5t566" event={"ID":"9e0bfec2-b111-430b-a47b-f8f8a661b594","Type":"ContainerDied","Data":"b567b9189c5490e899011f8e1854922d90cf5a7f774fa1f58fdd281f18b0d569"} Jan 05 21:49:49 crc kubenswrapper[5000]: I0105 21:49:49.813345 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"98ae3293-772a-4a0d-8b5e-245e02531e31","Type":"ContainerStarted","Data":"52357d2285ea1f1790705753ed2c12d8b640ab1d6ff707cb8f17563115e6be19"} Jan 05 21:49:49 crc kubenswrapper[5000]: I0105 21:49:49.813387 
5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"98ae3293-772a-4a0d-8b5e-245e02531e31","Type":"ContainerStarted","Data":"fdf21bab5eac68e171a7a07ad1046cfec96473b7ee3dcb8af9f532a6a961b42e"} Jan 05 21:49:49 crc kubenswrapper[5000]: I0105 21:49:49.813522 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 05 21:49:49 crc kubenswrapper[5000]: I0105 21:49:49.859644 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.6808137749999998 podStartE2EDuration="2.859627798s" podCreationTimestamp="2026-01-05 21:49:47 +0000 UTC" firstStartedPulling="2026-01-05 21:49:47.757822873 +0000 UTC m=+942.714025332" lastFinishedPulling="2026-01-05 21:49:48.936636886 +0000 UTC m=+943.892839355" observedRunningTime="2026-01-05 21:49:49.857183508 +0000 UTC m=+944.813385977" watchObservedRunningTime="2026-01-05 21:49:49.859627798 +0000 UTC m=+944.815830267" Jan 05 21:49:50 crc kubenswrapper[5000]: I0105 21:49:50.305498 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 05 21:49:50 crc kubenswrapper[5000]: I0105 21:49:50.306093 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 05 21:49:50 crc kubenswrapper[5000]: I0105 21:49:50.371820 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 05 21:49:50 crc kubenswrapper[5000]: I0105 21:49:50.904525 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 05 21:49:51 crc kubenswrapper[5000]: I0105 21:49:51.719583 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:51 crc kubenswrapper[5000]: I0105 21:49:51.719636 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/openstack-cell1-galera-0" Jan 05 21:49:52 crc kubenswrapper[5000]: I0105 21:49:52.441104 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" Jan 05 21:49:52 crc kubenswrapper[5000]: I0105 21:49:52.651213 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:49:52 crc kubenswrapper[5000]: I0105 21:49:52.698201 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vzrfk"] Jan 05 21:49:52 crc kubenswrapper[5000]: I0105 21:49:52.836460 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" podUID="64ead72c-36ef-416a-b028-2f4344d62508" containerName="dnsmasq-dns" containerID="cri-o://4a8bd664abefe4b1a459c78707ec3184f0dbaedfd8c5a00b908bf482ae319e0e" gracePeriod=10 Jan 05 21:49:53 crc kubenswrapper[5000]: I0105 21:49:53.099199 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:49:53 crc kubenswrapper[5000]: I0105 21:49:53.099483 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:49:53 crc kubenswrapper[5000]: I0105 21:49:53.415660 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:53 crc kubenswrapper[5000]: I0105 21:49:53.415992 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:53 crc kubenswrapper[5000]: I0105 21:49:53.463515 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:53 crc kubenswrapper[5000]: I0105 21:49:53.845725 5000 generic.go:334] "Generic (PLEG): container finished" podID="64ead72c-36ef-416a-b028-2f4344d62508" containerID="4a8bd664abefe4b1a459c78707ec3184f0dbaedfd8c5a00b908bf482ae319e0e" exitCode=0 Jan 05 21:49:53 crc kubenswrapper[5000]: I0105 21:49:53.845748 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" event={"ID":"64ead72c-36ef-416a-b028-2f4344d62508","Type":"ContainerDied","Data":"4a8bd664abefe4b1a459c78707ec3184f0dbaedfd8c5a00b908bf482ae319e0e"} Jan 05 21:49:53 crc kubenswrapper[5000]: I0105 21:49:53.906281 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x7dkd" Jan 05 21:49:53 crc kubenswrapper[5000]: I0105 21:49:53.931518 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-k8prf"] Jan 05 21:49:53 crc kubenswrapper[5000]: I0105 21:49:53.933279 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:53 crc kubenswrapper[5000]: I0105 21:49:53.964525 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-k8prf"] Jan 05 21:49:53 crc kubenswrapper[5000]: I0105 21:49:53.980561 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7dkd"] Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.035750 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvd6j\" (UniqueName: \"kubernetes.io/projected/bf40f774-440a-4644-9324-66f2c7d2647e-kube-api-access-cvd6j\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.035845 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-dns-svc\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.035866 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-config\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.035900 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " 
pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.035959 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.136917 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvd6j\" (UniqueName: \"kubernetes.io/projected/bf40f774-440a-4644-9324-66f2c7d2647e-kube-api-access-cvd6j\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.137022 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-dns-svc\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.137051 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-config\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.137075 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 
21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.137127 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.138040 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-dns-svc\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.138111 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-config\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.138252 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.138329 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.170681 5000 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cvd6j\" (UniqueName: \"kubernetes.io/projected/bf40f774-440a-4644-9324-66f2c7d2647e-kube-api-access-cvd6j\") pod \"dnsmasq-dns-698758b865-k8prf\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " pod="openstack/dnsmasq-dns-698758b865-k8prf"
Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.250201 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-k8prf"
Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.545448 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.706640 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-k8prf"]
Jan 05 21:49:54 crc kubenswrapper[5000]: W0105 21:49:54.717062 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf40f774_440a_4644_9324_66f2c7d2647e.slice/crio-f942763052fce2e67aa2a01e043e84b8033569bdbb1ce7fe240277e011e32248 WatchSource:0}: Error finding container f942763052fce2e67aa2a01e043e84b8033569bdbb1ce7fe240277e011e32248: Status 404 returned error can't find the container with id f942763052fce2e67aa2a01e043e84b8033569bdbb1ce7fe240277e011e32248
Jan 05 21:49:54 crc kubenswrapper[5000]: I0105 21:49:54.854852 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-k8prf" event={"ID":"bf40f774-440a-4644-9324-66f2c7d2647e","Type":"ContainerStarted","Data":"f942763052fce2e67aa2a01e043e84b8033569bdbb1ce7fe240277e011e32248"}
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.140882 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.150944 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.153126 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.153344 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.153739 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.157685 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.167307 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-9fcrb"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.255441 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.255494 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-cache\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.255518 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2vbd\" (UniqueName: \"kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-kube-api-access-m2vbd\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.255623 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-lock\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.255654 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.357436 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.357487 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-cache\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.357515 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2vbd\" (UniqueName: \"kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-kube-api-access-m2vbd\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.357567 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-lock\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.357592 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: E0105 21:49:55.357690 5000 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 05 21:49:55 crc kubenswrapper[5000]: E0105 21:49:55.357739 5000 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 05 21:49:55 crc kubenswrapper[5000]: E0105 21:49:55.357794 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift podName:1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f nodeName:}" failed. No retries permitted until 2026-01-05 21:49:55.857776539 +0000 UTC m=+950.813979008 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift") pod "swift-storage-0" (UID: "1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f") : configmap "swift-ring-files" not found
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.358070 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.358460 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-cache\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.358685 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-lock\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.389094 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2vbd\" (UniqueName: \"kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-kube-api-access-m2vbd\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.392524 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.860928 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x7dkd" podUID="df532b2d-cd12-4402-97a1-57fbe103805b" containerName="registry-server" containerID="cri-o://ccd6866d49c37c86e1d35bd577ffe79efeefb3de97bc7ce9682208eacb87eba0" gracePeriod=2
Jan 05 21:49:55 crc kubenswrapper[5000]: I0105 21:49:55.864391 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:55 crc kubenswrapper[5000]: E0105 21:49:55.864657 5000 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 05 21:49:55 crc kubenswrapper[5000]: E0105 21:49:55.864682 5000 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 05 21:49:55 crc kubenswrapper[5000]: E0105 21:49:55.864729 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift podName:1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f nodeName:}" failed. No retries permitted until 2026-01-05 21:49:56.864713796 +0000 UTC m=+951.820916265 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift") pod "swift-storage-0" (UID: "1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f") : configmap "swift-ring-files" not found
Jan 05 21:49:56 crc kubenswrapper[5000]: I0105 21:49:56.870398 5000 generic.go:334] "Generic (PLEG): container finished" podID="df532b2d-cd12-4402-97a1-57fbe103805b" containerID="ccd6866d49c37c86e1d35bd577ffe79efeefb3de97bc7ce9682208eacb87eba0" exitCode=0
Jan 05 21:49:56 crc kubenswrapper[5000]: I0105 21:49:56.870446 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7dkd" event={"ID":"df532b2d-cd12-4402-97a1-57fbe103805b","Type":"ContainerDied","Data":"ccd6866d49c37c86e1d35bd577ffe79efeefb3de97bc7ce9682208eacb87eba0"}
Jan 05 21:49:56 crc kubenswrapper[5000]: I0105 21:49:56.880482 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:56 crc kubenswrapper[5000]: E0105 21:49:56.880697 5000 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 05 21:49:56 crc kubenswrapper[5000]: E0105 21:49:56.880761 5000 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 05 21:49:56 crc kubenswrapper[5000]: E0105 21:49:56.880847 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift podName:1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f nodeName:}" failed. No retries permitted until 2026-01-05 21:49:58.880819762 +0000 UTC m=+953.837022241 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift") pod "swift-storage-0" (UID: "1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f") : configmap "swift-ring-files" not found
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.423520 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.449118 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-5fa5-account-create-update-frhwv"]
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.450253 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5fa5-account-create-update-frhwv"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.452405 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.460182 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5fa5-account-create-update-frhwv"]
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.473775 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-h8f2j"]
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.476036 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-h8f2j"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.488057 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-h8f2j"]
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.576292 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.596451 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64kqn\" (UniqueName: \"kubernetes.io/projected/f59c30fb-8d31-4a59-8ba3-ec838c4cd239-kube-api-access-64kqn\") pod \"glance-db-create-h8f2j\" (UID: \"f59c30fb-8d31-4a59-8ba3-ec838c4cd239\") " pod="openstack/glance-db-create-h8f2j"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.596524 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f59c30fb-8d31-4a59-8ba3-ec838c4cd239-operator-scripts\") pod \"glance-db-create-h8f2j\" (UID: \"f59c30fb-8d31-4a59-8ba3-ec838c4cd239\") " pod="openstack/glance-db-create-h8f2j"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.596548 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0767d8af-09be-4773-abb0-0c31c01a4eda-operator-scripts\") pod \"glance-5fa5-account-create-update-frhwv\" (UID: \"0767d8af-09be-4773-abb0-0c31c01a4eda\") " pod="openstack/glance-5fa5-account-create-update-frhwv"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.596602 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2xpq\" (UniqueName: \"kubernetes.io/projected/0767d8af-09be-4773-abb0-0c31c01a4eda-kube-api-access-n2xpq\") pod \"glance-5fa5-account-create-update-frhwv\" (UID: \"0767d8af-09be-4773-abb0-0c31c01a4eda\") " pod="openstack/glance-5fa5-account-create-update-frhwv"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.602846 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.694624 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7dkd"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.697371 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-dns-svc\") pod \"64ead72c-36ef-416a-b028-2f4344d62508\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") "
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.697440 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-ovsdbserver-nb\") pod \"64ead72c-36ef-416a-b028-2f4344d62508\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") "
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.697536 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-config\") pod \"64ead72c-36ef-416a-b028-2f4344d62508\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") "
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.697633 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvlrj\" (UniqueName: \"kubernetes.io/projected/64ead72c-36ef-416a-b028-2f4344d62508-kube-api-access-xvlrj\") pod \"64ead72c-36ef-416a-b028-2f4344d62508\" (UID: \"64ead72c-36ef-416a-b028-2f4344d62508\") "
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.697826 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2xpq\" (UniqueName: \"kubernetes.io/projected/0767d8af-09be-4773-abb0-0c31c01a4eda-kube-api-access-n2xpq\") pod \"glance-5fa5-account-create-update-frhwv\" (UID: \"0767d8af-09be-4773-abb0-0c31c01a4eda\") " pod="openstack/glance-5fa5-account-create-update-frhwv"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.697930 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64kqn\" (UniqueName: \"kubernetes.io/projected/f59c30fb-8d31-4a59-8ba3-ec838c4cd239-kube-api-access-64kqn\") pod \"glance-db-create-h8f2j\" (UID: \"f59c30fb-8d31-4a59-8ba3-ec838c4cd239\") " pod="openstack/glance-db-create-h8f2j"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.697968 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f59c30fb-8d31-4a59-8ba3-ec838c4cd239-operator-scripts\") pod \"glance-db-create-h8f2j\" (UID: \"f59c30fb-8d31-4a59-8ba3-ec838c4cd239\") " pod="openstack/glance-db-create-h8f2j"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.697991 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0767d8af-09be-4773-abb0-0c31c01a4eda-operator-scripts\") pod \"glance-5fa5-account-create-update-frhwv\" (UID: \"0767d8af-09be-4773-abb0-0c31c01a4eda\") " pod="openstack/glance-5fa5-account-create-update-frhwv"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.698642 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0767d8af-09be-4773-abb0-0c31c01a4eda-operator-scripts\") pod \"glance-5fa5-account-create-update-frhwv\" (UID: \"0767d8af-09be-4773-abb0-0c31c01a4eda\") " pod="openstack/glance-5fa5-account-create-update-frhwv"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.701422 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f59c30fb-8d31-4a59-8ba3-ec838c4cd239-operator-scripts\") pod \"glance-db-create-h8f2j\" (UID: \"f59c30fb-8d31-4a59-8ba3-ec838c4cd239\") " pod="openstack/glance-db-create-h8f2j"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.709019 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ead72c-36ef-416a-b028-2f4344d62508-kube-api-access-xvlrj" (OuterVolumeSpecName: "kube-api-access-xvlrj") pod "64ead72c-36ef-416a-b028-2f4344d62508" (UID: "64ead72c-36ef-416a-b028-2f4344d62508"). InnerVolumeSpecName "kube-api-access-xvlrj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.727705 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2xpq\" (UniqueName: \"kubernetes.io/projected/0767d8af-09be-4773-abb0-0c31c01a4eda-kube-api-access-n2xpq\") pod \"glance-5fa5-account-create-update-frhwv\" (UID: \"0767d8af-09be-4773-abb0-0c31c01a4eda\") " pod="openstack/glance-5fa5-account-create-update-frhwv"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.742031 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64kqn\" (UniqueName: \"kubernetes.io/projected/f59c30fb-8d31-4a59-8ba3-ec838c4cd239-kube-api-access-64kqn\") pod \"glance-db-create-h8f2j\" (UID: \"f59c30fb-8d31-4a59-8ba3-ec838c4cd239\") " pod="openstack/glance-db-create-h8f2j"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.761135 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64ead72c-36ef-416a-b028-2f4344d62508" (UID: "64ead72c-36ef-416a-b028-2f4344d62508"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.781521 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-config" (OuterVolumeSpecName: "config") pod "64ead72c-36ef-416a-b028-2f4344d62508" (UID: "64ead72c-36ef-416a-b028-2f4344d62508"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.783541 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64ead72c-36ef-416a-b028-2f4344d62508" (UID: "64ead72c-36ef-416a-b028-2f4344d62508"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.799273 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df532b2d-cd12-4402-97a1-57fbe103805b-utilities\") pod \"df532b2d-cd12-4402-97a1-57fbe103805b\" (UID: \"df532b2d-cd12-4402-97a1-57fbe103805b\") "
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.799403 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4rjq\" (UniqueName: \"kubernetes.io/projected/df532b2d-cd12-4402-97a1-57fbe103805b-kube-api-access-s4rjq\") pod \"df532b2d-cd12-4402-97a1-57fbe103805b\" (UID: \"df532b2d-cd12-4402-97a1-57fbe103805b\") "
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.799449 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df532b2d-cd12-4402-97a1-57fbe103805b-catalog-content\") pod \"df532b2d-cd12-4402-97a1-57fbe103805b\" (UID: \"df532b2d-cd12-4402-97a1-57fbe103805b\") "
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.799918 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-config\") on node \"crc\" DevicePath \"\""
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.799945 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvlrj\" (UniqueName: \"kubernetes.io/projected/64ead72c-36ef-416a-b028-2f4344d62508-kube-api-access-xvlrj\") on node \"crc\" DevicePath \"\""
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.799961 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.799975 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ead72c-36ef-416a-b028-2f4344d62508-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.801087 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df532b2d-cd12-4402-97a1-57fbe103805b-utilities" (OuterVolumeSpecName: "utilities") pod "df532b2d-cd12-4402-97a1-57fbe103805b" (UID: "df532b2d-cd12-4402-97a1-57fbe103805b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.803843 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df532b2d-cd12-4402-97a1-57fbe103805b-kube-api-access-s4rjq" (OuterVolumeSpecName: "kube-api-access-s4rjq") pod "df532b2d-cd12-4402-97a1-57fbe103805b" (UID: "df532b2d-cd12-4402-97a1-57fbe103805b"). InnerVolumeSpecName "kube-api-access-s4rjq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.865822 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5fa5-account-create-update-frhwv"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.881392 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" event={"ID":"64ead72c-36ef-416a-b028-2f4344d62508","Type":"ContainerDied","Data":"56aacc02292e3595215dcb75fde798e0eccee24f560f41a106953e0a728282bc"}
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.881454 5000 scope.go:117] "RemoveContainer" containerID="4a8bd664abefe4b1a459c78707ec3184f0dbaedfd8c5a00b908bf482ae319e0e"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.881581 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.882834 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-h8f2j"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.890206 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7dkd" event={"ID":"df532b2d-cd12-4402-97a1-57fbe103805b","Type":"ContainerDied","Data":"56cd64cf09af9dbf48fedd3481d3199619d0d533a74ffa5699bdc64499e41564"}
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.890278 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7dkd"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.899080 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5t566" event={"ID":"9e0bfec2-b111-430b-a47b-f8f8a661b594","Type":"ContainerStarted","Data":"62b7f798254976d5b217217f244779d8a8f5eca5ee4aa6fb00757d6ead576149"}
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.900753 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df532b2d-cd12-4402-97a1-57fbe103805b-utilities\") on node \"crc\" DevicePath \"\""
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.900772 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4rjq\" (UniqueName: \"kubernetes.io/projected/df532b2d-cd12-4402-97a1-57fbe103805b-kube-api-access-s4rjq\") on node \"crc\" DevicePath \"\""
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.905370 5000 generic.go:334] "Generic (PLEG): container finished" podID="bf40f774-440a-4644-9324-66f2c7d2647e" containerID="0d5345eff85d28aeb3abd8d27303d2e0ea714549381c9db8454cf007de1f0cd7" exitCode=0
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.906544 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-k8prf" event={"ID":"bf40f774-440a-4644-9324-66f2c7d2647e","Type":"ContainerDied","Data":"0d5345eff85d28aeb3abd8d27303d2e0ea714549381c9db8454cf007de1f0cd7"}
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.931951 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5t566" podStartSLOduration=2.585763548 podStartE2EDuration="11.931933455s" podCreationTimestamp="2026-01-05 21:49:46 +0000 UTC" firstStartedPulling="2026-01-05 21:49:47.793905532 +0000 UTC m=+942.750107991" lastFinishedPulling="2026-01-05 21:49:57.140075429 +0000 UTC m=+952.096277898" observedRunningTime="2026-01-05 21:49:57.921965211 +0000 UTC m=+952.878167680" watchObservedRunningTime="2026-01-05 21:49:57.931933455 +0000 UTC m=+952.888135924"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.932399 5000 scope.go:117] "RemoveContainer" containerID="b0c1926b446e00a4fc12ce12dedef410d4a9183b73866b5442d6affe3f109cda"
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.941644 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vzrfk"]
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.942056 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df532b2d-cd12-4402-97a1-57fbe103805b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df532b2d-cd12-4402-97a1-57fbe103805b" (UID: "df532b2d-cd12-4402-97a1-57fbe103805b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.947354 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vzrfk"]
Jan 05 21:49:57 crc kubenswrapper[5000]: I0105 21:49:57.980306 5000 scope.go:117] "RemoveContainer" containerID="ccd6866d49c37c86e1d35bd577ffe79efeefb3de97bc7ce9682208eacb87eba0"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.003413 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df532b2d-cd12-4402-97a1-57fbe103805b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.007759 5000 scope.go:117] "RemoveContainer" containerID="062875dad6d5113d1f5bd6b21328dc34a4b5e7d53611e86f0fbaa3ad95c3fc31"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.047000 5000 scope.go:117] "RemoveContainer" containerID="29b3459dd7c26e90b6f708672ccab083e8dbfd99c2005ce05871a6064541a952"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.247962 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7dkd"]
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.249095 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x7dkd"]
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.402075 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5fa5-account-create-update-frhwv"]
Jan 05 21:49:58 crc kubenswrapper[5000]: W0105 21:49:58.422945 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0767d8af_09be_4773_abb0_0c31c01a4eda.slice/crio-de1d2b666e98d71cef8b14337080ed533166b3cf5b72e604ba7090197f07ea80 WatchSource:0}: Error finding container de1d2b666e98d71cef8b14337080ed533166b3cf5b72e604ba7090197f07ea80: Status 404 returned error can't find the container with id de1d2b666e98d71cef8b14337080ed533166b3cf5b72e604ba7090197f07ea80
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.533395 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-h8f2j"]
Jan 05 21:49:58 crc kubenswrapper[5000]: W0105 21:49:58.545185 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf59c30fb_8d31_4a59_8ba3_ec838c4cd239.slice/crio-0368b71ee32ae0c3095e01571b8353aa40f027505e6ac5b27a1beb34cb44f517 WatchSource:0}: Error finding container 0368b71ee32ae0c3095e01571b8353aa40f027505e6ac5b27a1beb34cb44f517: Status 404 returned error can't find the container with id 0368b71ee32ae0c3095e01571b8353aa40f027505e6ac5b27a1beb34cb44f517
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.907338 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-q74mv"]
Jan 05 21:49:58 crc kubenswrapper[5000]: E0105 21:49:58.907674 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ead72c-36ef-416a-b028-2f4344d62508" containerName="dnsmasq-dns"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.907686 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ead72c-36ef-416a-b028-2f4344d62508" containerName="dnsmasq-dns"
Jan 05 21:49:58 crc kubenswrapper[5000]: E0105 21:49:58.907723 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df532b2d-cd12-4402-97a1-57fbe103805b" containerName="extract-utilities"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.907730 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="df532b2d-cd12-4402-97a1-57fbe103805b" containerName="extract-utilities"
Jan 05 21:49:58 crc kubenswrapper[5000]: E0105 21:49:58.907744 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df532b2d-cd12-4402-97a1-57fbe103805b" containerName="extract-content"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.907751 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="df532b2d-cd12-4402-97a1-57fbe103805b" containerName="extract-content"
Jan 05 21:49:58 crc kubenswrapper[5000]: E0105 21:49:58.907766 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ead72c-36ef-416a-b028-2f4344d62508" containerName="init"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.907772 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ead72c-36ef-416a-b028-2f4344d62508" containerName="init"
Jan 05 21:49:58 crc kubenswrapper[5000]: E0105 21:49:58.907789 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df532b2d-cd12-4402-97a1-57fbe103805b" containerName="registry-server"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.907795 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="df532b2d-cd12-4402-97a1-57fbe103805b" containerName="registry-server"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.907958 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ead72c-36ef-416a-b028-2f4344d62508" containerName="dnsmasq-dns"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.907969 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="df532b2d-cd12-4402-97a1-57fbe103805b" containerName="registry-server"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.908487 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q74mv"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.912544 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.916428 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-k8prf" event={"ID":"bf40f774-440a-4644-9324-66f2c7d2647e","Type":"ContainerStarted","Data":"61f168b9c80051e15502e7c92859ec86a555a8087198db14f4888a76dc6d70dd"}
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.916589 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-k8prf"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.922117 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0"
Jan 05 21:49:58 crc kubenswrapper[5000]: E0105 21:49:58.922823 5000 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 05 21:49:58 crc kubenswrapper[5000]: E0105 21:49:58.922838 5000 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 05 21:49:58 crc kubenswrapper[5000]: E0105 21:49:58.922909 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift podName:1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f nodeName:}" failed. No retries permitted until 2026-01-05 21:50:02.922874934 +0000 UTC m=+957.879077403 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift") pod "swift-storage-0" (UID: "1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f") : configmap "swift-ring-files" not found
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.923340 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q74mv"]
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.926646 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-h8f2j" event={"ID":"f59c30fb-8d31-4a59-8ba3-ec838c4cd239","Type":"ContainerStarted","Data":"79bb225cf83550573870dfdbb6439c21937df62ce093c2f514d9322f09d22208"}
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.926717 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-h8f2j" event={"ID":"f59c30fb-8d31-4a59-8ba3-ec838c4cd239","Type":"ContainerStarted","Data":"0368b71ee32ae0c3095e01571b8353aa40f027505e6ac5b27a1beb34cb44f517"}
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.936546 5000 generic.go:334] "Generic (PLEG): container finished" podID="0767d8af-09be-4773-abb0-0c31c01a4eda" containerID="9c9fb7c88b8c8099a7622257d7fd053bb7481afbe0448e437d504f365a5cbba6" exitCode=0
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.937876 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5fa5-account-create-update-frhwv" event={"ID":"0767d8af-09be-4773-abb0-0c31c01a4eda","Type":"ContainerDied","Data":"9c9fb7c88b8c8099a7622257d7fd053bb7481afbe0448e437d504f365a5cbba6"}
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.937963 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5fa5-account-create-update-frhwv" event={"ID":"0767d8af-09be-4773-abb0-0c31c01a4eda","Type":"ContainerStarted","Data":"de1d2b666e98d71cef8b14337080ed533166b3cf5b72e604ba7090197f07ea80"}
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.973153 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-h8f2j" podStartSLOduration=1.973133416 podStartE2EDuration="1.973133416s" podCreationTimestamp="2026-01-05 21:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:49:58.962231156 +0000 UTC m=+953.918433625" watchObservedRunningTime="2026-01-05 21:49:58.973133416 +0000 UTC m=+953.929335885"
Jan 05 21:49:58 crc kubenswrapper[5000]: I0105 21:49:58.981116 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-k8prf" podStartSLOduration=5.981099623 podStartE2EDuration="5.981099623s" podCreationTimestamp="2026-01-05 21:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:49:58.980548558 +0000 UTC m=+953.936751027" watchObservedRunningTime="2026-01-05 21:49:58.981099623 +0000 UTC m=+953.937302092"
Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.024701 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/025535fa-3a87-4b59-9f09-caa528190248-operator-scripts\") pod \"root-account-create-update-q74mv\" (UID: \"025535fa-3a87-4b59-9f09-caa528190248\") " pod="openstack/root-account-create-update-q74mv"
Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.024860 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqx44\" (UniqueName:
\"kubernetes.io/projected/025535fa-3a87-4b59-9f09-caa528190248-kube-api-access-xqx44\") pod \"root-account-create-update-q74mv\" (UID: \"025535fa-3a87-4b59-9f09-caa528190248\") " pod="openstack/root-account-create-update-q74mv" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.085151 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nkpzh"] Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.096583 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nkpzh"] Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.096706 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.098409 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.098680 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.099308 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.126001 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-swiftconf\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.126052 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bcee38b5-1aa2-4d3f-8545-dfc618226422-etc-swift\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " 
pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.126095 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bcee38b5-1aa2-4d3f-8545-dfc618226422-ring-data-devices\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.126138 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/025535fa-3a87-4b59-9f09-caa528190248-operator-scripts\") pod \"root-account-create-update-q74mv\" (UID: \"025535fa-3a87-4b59-9f09-caa528190248\") " pod="openstack/root-account-create-update-q74mv" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.126179 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcee38b5-1aa2-4d3f-8545-dfc618226422-scripts\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.126241 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-combined-ca-bundle\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.126290 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6zm6\" (UniqueName: \"kubernetes.io/projected/bcee38b5-1aa2-4d3f-8545-dfc618226422-kube-api-access-f6zm6\") pod \"swift-ring-rebalance-nkpzh\" (UID: 
\"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.126332 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqx44\" (UniqueName: \"kubernetes.io/projected/025535fa-3a87-4b59-9f09-caa528190248-kube-api-access-xqx44\") pod \"root-account-create-update-q74mv\" (UID: \"025535fa-3a87-4b59-9f09-caa528190248\") " pod="openstack/root-account-create-update-q74mv" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.126380 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-dispersionconf\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.126904 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/025535fa-3a87-4b59-9f09-caa528190248-operator-scripts\") pod \"root-account-create-update-q74mv\" (UID: \"025535fa-3a87-4b59-9f09-caa528190248\") " pod="openstack/root-account-create-update-q74mv" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.144769 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqx44\" (UniqueName: \"kubernetes.io/projected/025535fa-3a87-4b59-9f09-caa528190248-kube-api-access-xqx44\") pod \"root-account-create-update-q74mv\" (UID: \"025535fa-3a87-4b59-9f09-caa528190248\") " pod="openstack/root-account-create-update-q74mv" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.227410 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-swiftconf\") pod \"swift-ring-rebalance-nkpzh\" (UID: 
\"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.227463 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bcee38b5-1aa2-4d3f-8545-dfc618226422-etc-swift\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.227488 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bcee38b5-1aa2-4d3f-8545-dfc618226422-ring-data-devices\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.227531 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcee38b5-1aa2-4d3f-8545-dfc618226422-scripts\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.227562 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-combined-ca-bundle\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.227604 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6zm6\" (UniqueName: \"kubernetes.io/projected/bcee38b5-1aa2-4d3f-8545-dfc618226422-kube-api-access-f6zm6\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " 
pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.227660 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-dispersionconf\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.227986 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bcee38b5-1aa2-4d3f-8545-dfc618226422-etc-swift\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.228336 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bcee38b5-1aa2-4d3f-8545-dfc618226422-ring-data-devices\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.228603 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcee38b5-1aa2-4d3f-8545-dfc618226422-scripts\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.230311 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q74mv" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.230573 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-combined-ca-bundle\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.231664 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-dispersionconf\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.231967 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-swiftconf\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.252031 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6zm6\" (UniqueName: \"kubernetes.io/projected/bcee38b5-1aa2-4d3f-8545-dfc618226422-kube-api-access-f6zm6\") pod \"swift-ring-rebalance-nkpzh\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.342479 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64ead72c-36ef-416a-b028-2f4344d62508" path="/var/lib/kubelet/pods/64ead72c-36ef-416a-b028-2f4344d62508/volumes" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.343057 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="df532b2d-cd12-4402-97a1-57fbe103805b" path="/var/lib/kubelet/pods/df532b2d-cd12-4402-97a1-57fbe103805b/volumes" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.424744 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.646631 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q74mv"] Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.891843 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nkpzh"] Jan 05 21:49:59 crc kubenswrapper[5000]: W0105 21:49:59.899414 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcee38b5_1aa2_4d3f_8545_dfc618226422.slice/crio-ed78df8aa84f6334f7279414149bfe507759a2dd9840b53f93605eaff5f89b11 WatchSource:0}: Error finding container ed78df8aa84f6334f7279414149bfe507759a2dd9840b53f93605eaff5f89b11: Status 404 returned error can't find the container with id ed78df8aa84f6334f7279414149bfe507759a2dd9840b53f93605eaff5f89b11 Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.943859 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nkpzh" event={"ID":"bcee38b5-1aa2-4d3f-8545-dfc618226422","Type":"ContainerStarted","Data":"ed78df8aa84f6334f7279414149bfe507759a2dd9840b53f93605eaff5f89b11"} Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.945838 5000 generic.go:334] "Generic (PLEG): container finished" podID="f59c30fb-8d31-4a59-8ba3-ec838c4cd239" containerID="79bb225cf83550573870dfdbb6439c21937df62ce093c2f514d9322f09d22208" exitCode=0 Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.945925 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-h8f2j" 
event={"ID":"f59c30fb-8d31-4a59-8ba3-ec838c4cd239","Type":"ContainerDied","Data":"79bb225cf83550573870dfdbb6439c21937df62ce093c2f514d9322f09d22208"} Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.947598 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q74mv" event={"ID":"025535fa-3a87-4b59-9f09-caa528190248","Type":"ContainerStarted","Data":"260ef748ee1a072cda11b02968497e0db6064b1440c5c7d18503161ecb68b20a"} Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.947686 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q74mv" event={"ID":"025535fa-3a87-4b59-9f09-caa528190248","Type":"ContainerStarted","Data":"772ed8a3060bf02b637f4716f05af3c3d17ca4bed434fc4889323cf05d092a99"} Jan 05 21:49:59 crc kubenswrapper[5000]: I0105 21:49:59.977024 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-q74mv" podStartSLOduration=1.976998864 podStartE2EDuration="1.976998864s" podCreationTimestamp="2026-01-05 21:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:49:59.973328889 +0000 UTC m=+954.929531358" watchObservedRunningTime="2026-01-05 21:49:59.976998864 +0000 UTC m=+954.933201333" Jan 05 21:50:00 crc kubenswrapper[5000]: I0105 21:50:00.287829 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-5fa5-account-create-update-frhwv" Jan 05 21:50:00 crc kubenswrapper[5000]: I0105 21:50:00.450316 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2xpq\" (UniqueName: \"kubernetes.io/projected/0767d8af-09be-4773-abb0-0c31c01a4eda-kube-api-access-n2xpq\") pod \"0767d8af-09be-4773-abb0-0c31c01a4eda\" (UID: \"0767d8af-09be-4773-abb0-0c31c01a4eda\") " Jan 05 21:50:00 crc kubenswrapper[5000]: I0105 21:50:00.450730 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0767d8af-09be-4773-abb0-0c31c01a4eda-operator-scripts\") pod \"0767d8af-09be-4773-abb0-0c31c01a4eda\" (UID: \"0767d8af-09be-4773-abb0-0c31c01a4eda\") " Jan 05 21:50:00 crc kubenswrapper[5000]: I0105 21:50:00.451497 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0767d8af-09be-4773-abb0-0c31c01a4eda-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0767d8af-09be-4773-abb0-0c31c01a4eda" (UID: "0767d8af-09be-4773-abb0-0c31c01a4eda"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:00 crc kubenswrapper[5000]: I0105 21:50:00.457066 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0767d8af-09be-4773-abb0-0c31c01a4eda-kube-api-access-n2xpq" (OuterVolumeSpecName: "kube-api-access-n2xpq") pod "0767d8af-09be-4773-abb0-0c31c01a4eda" (UID: "0767d8af-09be-4773-abb0-0c31c01a4eda"). InnerVolumeSpecName "kube-api-access-n2xpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:00 crc kubenswrapper[5000]: I0105 21:50:00.552602 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2xpq\" (UniqueName: \"kubernetes.io/projected/0767d8af-09be-4773-abb0-0c31c01a4eda-kube-api-access-n2xpq\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:00 crc kubenswrapper[5000]: I0105 21:50:00.552637 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0767d8af-09be-4773-abb0-0c31c01a4eda-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:00 crc kubenswrapper[5000]: I0105 21:50:00.956002 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5fa5-account-create-update-frhwv" event={"ID":"0767d8af-09be-4773-abb0-0c31c01a4eda","Type":"ContainerDied","Data":"de1d2b666e98d71cef8b14337080ed533166b3cf5b72e604ba7090197f07ea80"} Jan 05 21:50:00 crc kubenswrapper[5000]: I0105 21:50:00.956042 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de1d2b666e98d71cef8b14337080ed533166b3cf5b72e604ba7090197f07ea80" Jan 05 21:50:00 crc kubenswrapper[5000]: I0105 21:50:00.956015 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-5fa5-account-create-update-frhwv" Jan 05 21:50:00 crc kubenswrapper[5000]: I0105 21:50:00.957605 5000 generic.go:334] "Generic (PLEG): container finished" podID="025535fa-3a87-4b59-9f09-caa528190248" containerID="260ef748ee1a072cda11b02968497e0db6064b1440c5c7d18503161ecb68b20a" exitCode=0 Jan 05 21:50:00 crc kubenswrapper[5000]: I0105 21:50:00.957650 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q74mv" event={"ID":"025535fa-3a87-4b59-9f09-caa528190248","Type":"ContainerDied","Data":"260ef748ee1a072cda11b02968497e0db6064b1440c5c7d18503161ecb68b20a"} Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.344840 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-h8f2j" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.379632 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f59c30fb-8d31-4a59-8ba3-ec838c4cd239-operator-scripts\") pod \"f59c30fb-8d31-4a59-8ba3-ec838c4cd239\" (UID: \"f59c30fb-8d31-4a59-8ba3-ec838c4cd239\") " Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.379759 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64kqn\" (UniqueName: \"kubernetes.io/projected/f59c30fb-8d31-4a59-8ba3-ec838c4cd239-kube-api-access-64kqn\") pod \"f59c30fb-8d31-4a59-8ba3-ec838c4cd239\" (UID: \"f59c30fb-8d31-4a59-8ba3-ec838c4cd239\") " Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.380261 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59c30fb-8d31-4a59-8ba3-ec838c4cd239-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f59c30fb-8d31-4a59-8ba3-ec838c4cd239" (UID: "f59c30fb-8d31-4a59-8ba3-ec838c4cd239"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.383562 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f59c30fb-8d31-4a59-8ba3-ec838c4cd239-kube-api-access-64kqn" (OuterVolumeSpecName: "kube-api-access-64kqn") pod "f59c30fb-8d31-4a59-8ba3-ec838c4cd239" (UID: "f59c30fb-8d31-4a59-8ba3-ec838c4cd239"). InnerVolumeSpecName "kube-api-access-64kqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.480958 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f59c30fb-8d31-4a59-8ba3-ec838c4cd239-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.480989 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64kqn\" (UniqueName: \"kubernetes.io/projected/f59c30fb-8d31-4a59-8ba3-ec838c4cd239-kube-api-access-64kqn\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.722408 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-4wcrs"] Jan 05 21:50:01 crc kubenswrapper[5000]: E0105 21:50:01.729019 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0767d8af-09be-4773-abb0-0c31c01a4eda" containerName="mariadb-account-create-update" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.729055 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0767d8af-09be-4773-abb0-0c31c01a4eda" containerName="mariadb-account-create-update" Jan 05 21:50:01 crc kubenswrapper[5000]: E0105 21:50:01.729069 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f59c30fb-8d31-4a59-8ba3-ec838c4cd239" containerName="mariadb-database-create" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.729091 5000 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f59c30fb-8d31-4a59-8ba3-ec838c4cd239" containerName="mariadb-database-create" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.729276 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="0767d8af-09be-4773-abb0-0c31c01a4eda" containerName="mariadb-account-create-update" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.729296 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f59c30fb-8d31-4a59-8ba3-ec838c4cd239" containerName="mariadb-database-create" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.730017 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4wcrs" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.737045 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4wcrs"] Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.789812 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h72qz\" (UniqueName: \"kubernetes.io/projected/13a76d52-5034-45e8-a156-448f54eaafaa-kube-api-access-h72qz\") pod \"keystone-db-create-4wcrs\" (UID: \"13a76d52-5034-45e8-a156-448f54eaafaa\") " pod="openstack/keystone-db-create-4wcrs" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.789915 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a76d52-5034-45e8-a156-448f54eaafaa-operator-scripts\") pod \"keystone-db-create-4wcrs\" (UID: \"13a76d52-5034-45e8-a156-448f54eaafaa\") " pod="openstack/keystone-db-create-4wcrs" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.826002 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3ea5-account-create-update-fl5lh"] Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.826971 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3ea5-account-create-update-fl5lh" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.829523 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.849045 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3ea5-account-create-update-fl5lh"] Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.891012 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60c60f7d-7fa7-46a1-94fd-a9d5547a14f6-operator-scripts\") pod \"keystone-3ea5-account-create-update-fl5lh\" (UID: \"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6\") " pod="openstack/keystone-3ea5-account-create-update-fl5lh" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.891309 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h72qz\" (UniqueName: \"kubernetes.io/projected/13a76d52-5034-45e8-a156-448f54eaafaa-kube-api-access-h72qz\") pod \"keystone-db-create-4wcrs\" (UID: \"13a76d52-5034-45e8-a156-448f54eaafaa\") " pod="openstack/keystone-db-create-4wcrs" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.891356 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a76d52-5034-45e8-a156-448f54eaafaa-operator-scripts\") pod \"keystone-db-create-4wcrs\" (UID: \"13a76d52-5034-45e8-a156-448f54eaafaa\") " pod="openstack/keystone-db-create-4wcrs" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.891418 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl8qz\" (UniqueName: \"kubernetes.io/projected/60c60f7d-7fa7-46a1-94fd-a9d5547a14f6-kube-api-access-tl8qz\") pod \"keystone-3ea5-account-create-update-fl5lh\" (UID: 
\"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6\") " pod="openstack/keystone-3ea5-account-create-update-fl5lh" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.892211 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a76d52-5034-45e8-a156-448f54eaafaa-operator-scripts\") pod \"keystone-db-create-4wcrs\" (UID: \"13a76d52-5034-45e8-a156-448f54eaafaa\") " pod="openstack/keystone-db-create-4wcrs" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.910655 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h72qz\" (UniqueName: \"kubernetes.io/projected/13a76d52-5034-45e8-a156-448f54eaafaa-kube-api-access-h72qz\") pod \"keystone-db-create-4wcrs\" (UID: \"13a76d52-5034-45e8-a156-448f54eaafaa\") " pod="openstack/keystone-db-create-4wcrs" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.967216 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-h8f2j" event={"ID":"f59c30fb-8d31-4a59-8ba3-ec838c4cd239","Type":"ContainerDied","Data":"0368b71ee32ae0c3095e01571b8353aa40f027505e6ac5b27a1beb34cb44f517"} Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.967254 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-h8f2j" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.967265 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0368b71ee32ae0c3095e01571b8353aa40f027505e6ac5b27a1beb34cb44f517" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.992500 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl8qz\" (UniqueName: \"kubernetes.io/projected/60c60f7d-7fa7-46a1-94fd-a9d5547a14f6-kube-api-access-tl8qz\") pod \"keystone-3ea5-account-create-update-fl5lh\" (UID: \"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6\") " pod="openstack/keystone-3ea5-account-create-update-fl5lh" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.992600 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60c60f7d-7fa7-46a1-94fd-a9d5547a14f6-operator-scripts\") pod \"keystone-3ea5-account-create-update-fl5lh\" (UID: \"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6\") " pod="openstack/keystone-3ea5-account-create-update-fl5lh" Jan 05 21:50:01 crc kubenswrapper[5000]: I0105 21:50:01.993802 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60c60f7d-7fa7-46a1-94fd-a9d5547a14f6-operator-scripts\") pod \"keystone-3ea5-account-create-update-fl5lh\" (UID: \"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6\") " pod="openstack/keystone-3ea5-account-create-update-fl5lh" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.013083 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl8qz\" (UniqueName: \"kubernetes.io/projected/60c60f7d-7fa7-46a1-94fd-a9d5547a14f6-kube-api-access-tl8qz\") pod \"keystone-3ea5-account-create-update-fl5lh\" (UID: \"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6\") " pod="openstack/keystone-3ea5-account-create-update-fl5lh" Jan 05 21:50:02 crc 
kubenswrapper[5000]: I0105 21:50:02.040258 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-p9fkk"] Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.041275 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p9fkk" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.048924 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-p9fkk"] Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.060956 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4wcrs" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.094624 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxfkc\" (UniqueName: \"kubernetes.io/projected/a4110eb0-802a-41f4-a920-dbc15a48cf98-kube-api-access-zxfkc\") pod \"placement-db-create-p9fkk\" (UID: \"a4110eb0-802a-41f4-a920-dbc15a48cf98\") " pod="openstack/placement-db-create-p9fkk" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.094658 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4110eb0-802a-41f4-a920-dbc15a48cf98-operator-scripts\") pod \"placement-db-create-p9fkk\" (UID: \"a4110eb0-802a-41f4-a920-dbc15a48cf98\") " pod="openstack/placement-db-create-p9fkk" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.149417 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3ea5-account-create-update-fl5lh" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.158361 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6f58-account-create-update-p696k"] Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.159480 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6f58-account-create-update-p696k" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.161413 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.166157 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f58-account-create-update-p696k"] Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.196354 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxfkc\" (UniqueName: \"kubernetes.io/projected/a4110eb0-802a-41f4-a920-dbc15a48cf98-kube-api-access-zxfkc\") pod \"placement-db-create-p9fkk\" (UID: \"a4110eb0-802a-41f4-a920-dbc15a48cf98\") " pod="openstack/placement-db-create-p9fkk" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.196395 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4110eb0-802a-41f4-a920-dbc15a48cf98-operator-scripts\") pod \"placement-db-create-p9fkk\" (UID: \"a4110eb0-802a-41f4-a920-dbc15a48cf98\") " pod="openstack/placement-db-create-p9fkk" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.196451 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55530057-0b94-461e-a436-74813cb5ca59-operator-scripts\") pod \"placement-6f58-account-create-update-p696k\" (UID: \"55530057-0b94-461e-a436-74813cb5ca59\") " pod="openstack/placement-6f58-account-create-update-p696k" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.196478 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwqpl\" (UniqueName: \"kubernetes.io/projected/55530057-0b94-461e-a436-74813cb5ca59-kube-api-access-pwqpl\") pod \"placement-6f58-account-create-update-p696k\" (UID: 
\"55530057-0b94-461e-a436-74813cb5ca59\") " pod="openstack/placement-6f58-account-create-update-p696k" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.197269 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4110eb0-802a-41f4-a920-dbc15a48cf98-operator-scripts\") pod \"placement-db-create-p9fkk\" (UID: \"a4110eb0-802a-41f4-a920-dbc15a48cf98\") " pod="openstack/placement-db-create-p9fkk" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.214915 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxfkc\" (UniqueName: \"kubernetes.io/projected/a4110eb0-802a-41f4-a920-dbc15a48cf98-kube-api-access-zxfkc\") pod \"placement-db-create-p9fkk\" (UID: \"a4110eb0-802a-41f4-a920-dbc15a48cf98\") " pod="openstack/placement-db-create-p9fkk" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.298103 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwqpl\" (UniqueName: \"kubernetes.io/projected/55530057-0b94-461e-a436-74813cb5ca59-kube-api-access-pwqpl\") pod \"placement-6f58-account-create-update-p696k\" (UID: \"55530057-0b94-461e-a436-74813cb5ca59\") " pod="openstack/placement-6f58-account-create-update-p696k" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.298258 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55530057-0b94-461e-a436-74813cb5ca59-operator-scripts\") pod \"placement-6f58-account-create-update-p696k\" (UID: \"55530057-0b94-461e-a436-74813cb5ca59\") " pod="openstack/placement-6f58-account-create-update-p696k" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.299220 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55530057-0b94-461e-a436-74813cb5ca59-operator-scripts\") pod 
\"placement-6f58-account-create-update-p696k\" (UID: \"55530057-0b94-461e-a436-74813cb5ca59\") " pod="openstack/placement-6f58-account-create-update-p696k" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.313281 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwqpl\" (UniqueName: \"kubernetes.io/projected/55530057-0b94-461e-a436-74813cb5ca59-kube-api-access-pwqpl\") pod \"placement-6f58-account-create-update-p696k\" (UID: \"55530057-0b94-461e-a436-74813cb5ca59\") " pod="openstack/placement-6f58-account-create-update-p696k" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.395259 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p9fkk" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.440779 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fd796d7df-vzrfk" podUID="64ead72c-36ef-416a-b028-2f4344d62508" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: i/o timeout" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.482009 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f58-account-create-update-p696k" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.535489 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.686027 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-zg424"] Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.687688 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zg424" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.690407 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bqd9g" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.690726 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.695448 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zg424"] Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.704044 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-combined-ca-bundle\") pod \"glance-db-sync-zg424\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " pod="openstack/glance-db-sync-zg424" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.704160 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9x59\" (UniqueName: \"kubernetes.io/projected/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-kube-api-access-q9x59\") pod \"glance-db-sync-zg424\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " pod="openstack/glance-db-sync-zg424" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.704195 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-db-sync-config-data\") pod \"glance-db-sync-zg424\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " pod="openstack/glance-db-sync-zg424" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.704237 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-config-data\") pod \"glance-db-sync-zg424\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " pod="openstack/glance-db-sync-zg424" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.805831 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-config-data\") pod \"glance-db-sync-zg424\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " pod="openstack/glance-db-sync-zg424" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.805965 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-combined-ca-bundle\") pod \"glance-db-sync-zg424\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " pod="openstack/glance-db-sync-zg424" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.806040 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9x59\" (UniqueName: \"kubernetes.io/projected/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-kube-api-access-q9x59\") pod \"glance-db-sync-zg424\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " pod="openstack/glance-db-sync-zg424" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.806058 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-db-sync-config-data\") pod \"glance-db-sync-zg424\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " pod="openstack/glance-db-sync-zg424" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.809585 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-combined-ca-bundle\") pod \"glance-db-sync-zg424\" (UID: 
\"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " pod="openstack/glance-db-sync-zg424" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.818441 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-config-data\") pod \"glance-db-sync-zg424\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " pod="openstack/glance-db-sync-zg424" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.820324 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-db-sync-config-data\") pod \"glance-db-sync-zg424\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " pod="openstack/glance-db-sync-zg424" Jan 05 21:50:02 crc kubenswrapper[5000]: I0105 21:50:02.823458 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9x59\" (UniqueName: \"kubernetes.io/projected/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-kube-api-access-q9x59\") pod \"glance-db-sync-zg424\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " pod="openstack/glance-db-sync-zg424" Jan 05 21:50:03 crc kubenswrapper[5000]: I0105 21:50:03.005359 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zg424" Jan 05 21:50:03 crc kubenswrapper[5000]: I0105 21:50:03.009340 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0" Jan 05 21:50:03 crc kubenswrapper[5000]: E0105 21:50:03.009510 5000 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 05 21:50:03 crc kubenswrapper[5000]: E0105 21:50:03.009533 5000 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 05 21:50:03 crc kubenswrapper[5000]: E0105 21:50:03.009585 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift podName:1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f nodeName:}" failed. No retries permitted until 2026-01-05 21:50:11.009567863 +0000 UTC m=+965.965770342 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift") pod "swift-storage-0" (UID: "1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f") : configmap "swift-ring-files" not found Jan 05 21:50:03 crc kubenswrapper[5000]: I0105 21:50:03.824135 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q74mv" Jan 05 21:50:03 crc kubenswrapper[5000]: I0105 21:50:03.925424 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/025535fa-3a87-4b59-9f09-caa528190248-operator-scripts\") pod \"025535fa-3a87-4b59-9f09-caa528190248\" (UID: \"025535fa-3a87-4b59-9f09-caa528190248\") " Jan 05 21:50:03 crc kubenswrapper[5000]: I0105 21:50:03.925795 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqx44\" (UniqueName: \"kubernetes.io/projected/025535fa-3a87-4b59-9f09-caa528190248-kube-api-access-xqx44\") pod \"025535fa-3a87-4b59-9f09-caa528190248\" (UID: \"025535fa-3a87-4b59-9f09-caa528190248\") " Jan 05 21:50:03 crc kubenswrapper[5000]: I0105 21:50:03.928276 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/025535fa-3a87-4b59-9f09-caa528190248-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "025535fa-3a87-4b59-9f09-caa528190248" (UID: "025535fa-3a87-4b59-9f09-caa528190248"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:03 crc kubenswrapper[5000]: I0105 21:50:03.929940 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025535fa-3a87-4b59-9f09-caa528190248-kube-api-access-xqx44" (OuterVolumeSpecName: "kube-api-access-xqx44") pod "025535fa-3a87-4b59-9f09-caa528190248" (UID: "025535fa-3a87-4b59-9f09-caa528190248"). InnerVolumeSpecName "kube-api-access-xqx44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:03 crc kubenswrapper[5000]: I0105 21:50:03.985064 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q74mv" event={"ID":"025535fa-3a87-4b59-9f09-caa528190248","Type":"ContainerDied","Data":"772ed8a3060bf02b637f4716f05af3c3d17ca4bed434fc4889323cf05d092a99"} Jan 05 21:50:03 crc kubenswrapper[5000]: I0105 21:50:03.985097 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="772ed8a3060bf02b637f4716f05af3c3d17ca4bed434fc4889323cf05d092a99" Jan 05 21:50:03 crc kubenswrapper[5000]: I0105 21:50:03.985158 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q74mv" Jan 05 21:50:04 crc kubenswrapper[5000]: I0105 21:50:04.027494 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/025535fa-3a87-4b59-9f09-caa528190248-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:04 crc kubenswrapper[5000]: I0105 21:50:04.027520 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqx44\" (UniqueName: \"kubernetes.io/projected/025535fa-3a87-4b59-9f09-caa528190248-kube-api-access-xqx44\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:04 crc kubenswrapper[5000]: I0105 21:50:04.252072 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:50:04 crc kubenswrapper[5000]: I0105 21:50:04.309942 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ht7kt"] Jan 05 21:50:04 crc kubenswrapper[5000]: I0105 21:50:04.310280 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" podUID="d7e0cb5f-226c-4617-a92f-f87b8e595498" containerName="dnsmasq-dns" 
containerID="cri-o://205c606aa43c916b17cd0325f4355cde0a1c42efb6079dab8582d4ee04edfb90" gracePeriod=10 Jan 05 21:50:04 crc kubenswrapper[5000]: I0105 21:50:04.373785 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3ea5-account-create-update-fl5lh"] Jan 05 21:50:04 crc kubenswrapper[5000]: W0105 21:50:04.382864 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60c60f7d_7fa7_46a1_94fd_a9d5547a14f6.slice/crio-68ab8f6652c9e9fe64beb39b27071e193b55926d56acdc7905e09f2961b82d1f WatchSource:0}: Error finding container 68ab8f6652c9e9fe64beb39b27071e193b55926d56acdc7905e09f2961b82d1f: Status 404 returned error can't find the container with id 68ab8f6652c9e9fe64beb39b27071e193b55926d56acdc7905e09f2961b82d1f Jan 05 21:50:04 crc kubenswrapper[5000]: I0105 21:50:04.460376 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4wcrs"] Jan 05 21:50:04 crc kubenswrapper[5000]: I0105 21:50:04.650057 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f58-account-create-update-p696k"] Jan 05 21:50:04 crc kubenswrapper[5000]: I0105 21:50:04.674445 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-p9fkk"] Jan 05 21:50:04 crc kubenswrapper[5000]: I0105 21:50:04.758629 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zg424"] Jan 05 21:50:04 crc kubenswrapper[5000]: I0105 21:50:04.975427 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.003787 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f58-account-create-update-p696k" event={"ID":"55530057-0b94-461e-a436-74813cb5ca59","Type":"ContainerStarted","Data":"80e948edab85ea93b26e3aa5eb49f1aabbc673dd673a6990490974322da046ba"} Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.005906 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ea5-account-create-update-fl5lh" event={"ID":"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6","Type":"ContainerStarted","Data":"14b42868e27b501ad1061e46d5cd836c1b8cbb70acf1ce55f08a3fe82ed35eb9"} Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.005945 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ea5-account-create-update-fl5lh" event={"ID":"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6","Type":"ContainerStarted","Data":"68ab8f6652c9e9fe64beb39b27071e193b55926d56acdc7905e09f2961b82d1f"} Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.007073 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zg424" event={"ID":"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f","Type":"ContainerStarted","Data":"059d7d568386eb4a2ff00d46968bd5639c52329dd272c188cf3b904249d9084c"} Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.008470 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nkpzh" event={"ID":"bcee38b5-1aa2-4d3f-8545-dfc618226422","Type":"ContainerStarted","Data":"a9ddb305516f27f2e023117bd5912b0c90d1546ab56315e838e8034a9ec489cd"} Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.009635 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p9fkk" event={"ID":"a4110eb0-802a-41f4-a920-dbc15a48cf98","Type":"ContainerStarted","Data":"a9ff90003027a84b5cac2a475c03f4ecb60d51f1e83fe62a9253853cfbe04525"} Jan 05 
21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.010983 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4wcrs" event={"ID":"13a76d52-5034-45e8-a156-448f54eaafaa","Type":"ContainerStarted","Data":"d5290dda34132c8bf757ea14eb389a8b3c8b4f01164a990ac6f30fb80a27df05"} Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.011024 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4wcrs" event={"ID":"13a76d52-5034-45e8-a156-448f54eaafaa","Type":"ContainerStarted","Data":"ffabf31302a4e93ee186ad7f1c64151d9c8f25c7d33e6ab1b3ce0a566d66acdd"} Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.012672 5000 generic.go:334] "Generic (PLEG): container finished" podID="d7e0cb5f-226c-4617-a92f-f87b8e595498" containerID="205c606aa43c916b17cd0325f4355cde0a1c42efb6079dab8582d4ee04edfb90" exitCode=0 Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.012700 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" event={"ID":"d7e0cb5f-226c-4617-a92f-f87b8e595498","Type":"ContainerDied","Data":"205c606aa43c916b17cd0325f4355cde0a1c42efb6079dab8582d4ee04edfb90"} Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.012719 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" event={"ID":"d7e0cb5f-226c-4617-a92f-f87b8e595498","Type":"ContainerDied","Data":"fcc08e2a75ab464583492ffe79c329149d1b7c7f78918e95a4888ad81386f20b"} Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.012739 5000 scope.go:117] "RemoveContainer" containerID="205c606aa43c916b17cd0325f4355cde0a1c42efb6079dab8582d4ee04edfb90" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.012853 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-ht7kt" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.030600 5000 scope.go:117] "RemoveContainer" containerID="5dd16decbbbf902190435999c438cad700646a97a983f0b3353041901f3506d2" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.061495 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-nkpzh" podStartSLOduration=2.043144365 podStartE2EDuration="6.061477006s" podCreationTimestamp="2026-01-05 21:49:59 +0000 UTC" firstStartedPulling="2026-01-05 21:49:59.901216354 +0000 UTC m=+954.857418823" lastFinishedPulling="2026-01-05 21:50:03.919548985 +0000 UTC m=+958.875751464" observedRunningTime="2026-01-05 21:50:05.053115977 +0000 UTC m=+960.009318446" watchObservedRunningTime="2026-01-05 21:50:05.061477006 +0000 UTC m=+960.017679475" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.081339 5000 scope.go:117] "RemoveContainer" containerID="205c606aa43c916b17cd0325f4355cde0a1c42efb6079dab8582d4ee04edfb90" Jan 05 21:50:05 crc kubenswrapper[5000]: E0105 21:50:05.083411 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"205c606aa43c916b17cd0325f4355cde0a1c42efb6079dab8582d4ee04edfb90\": container with ID starting with 205c606aa43c916b17cd0325f4355cde0a1c42efb6079dab8582d4ee04edfb90 not found: ID does not exist" containerID="205c606aa43c916b17cd0325f4355cde0a1c42efb6079dab8582d4ee04edfb90" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.083491 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205c606aa43c916b17cd0325f4355cde0a1c42efb6079dab8582d4ee04edfb90"} err="failed to get container status \"205c606aa43c916b17cd0325f4355cde0a1c42efb6079dab8582d4ee04edfb90\": rpc error: code = NotFound desc = could not find container \"205c606aa43c916b17cd0325f4355cde0a1c42efb6079dab8582d4ee04edfb90\": container with ID 
starting with 205c606aa43c916b17cd0325f4355cde0a1c42efb6079dab8582d4ee04edfb90 not found: ID does not exist" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.083538 5000 scope.go:117] "RemoveContainer" containerID="5dd16decbbbf902190435999c438cad700646a97a983f0b3353041901f3506d2" Jan 05 21:50:05 crc kubenswrapper[5000]: E0105 21:50:05.084293 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dd16decbbbf902190435999c438cad700646a97a983f0b3353041901f3506d2\": container with ID starting with 5dd16decbbbf902190435999c438cad700646a97a983f0b3353041901f3506d2 not found: ID does not exist" containerID="5dd16decbbbf902190435999c438cad700646a97a983f0b3353041901f3506d2" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.084353 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd16decbbbf902190435999c438cad700646a97a983f0b3353041901f3506d2"} err="failed to get container status \"5dd16decbbbf902190435999c438cad700646a97a983f0b3353041901f3506d2\": rpc error: code = NotFound desc = could not find container \"5dd16decbbbf902190435999c438cad700646a97a983f0b3353041901f3506d2\": container with ID starting with 5dd16decbbbf902190435999c438cad700646a97a983f0b3353041901f3506d2 not found: ID does not exist" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.087272 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-3ea5-account-create-update-fl5lh" podStartSLOduration=4.0872492 podStartE2EDuration="4.0872492s" podCreationTimestamp="2026-01-05 21:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:05.076302018 +0000 UTC m=+960.032504487" watchObservedRunningTime="2026-01-05 21:50:05.0872492 +0000 UTC m=+960.043451669" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.099720 5000 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-4wcrs" podStartSLOduration=4.099698045 podStartE2EDuration="4.099698045s" podCreationTimestamp="2026-01-05 21:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:05.095208277 +0000 UTC m=+960.051410746" watchObservedRunningTime="2026-01-05 21:50:05.099698045 +0000 UTC m=+960.055900514" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.146597 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-ovsdbserver-nb\") pod \"d7e0cb5f-226c-4617-a92f-f87b8e595498\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.146736 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-dns-svc\") pod \"d7e0cb5f-226c-4617-a92f-f87b8e595498\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.146868 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-ovsdbserver-sb\") pod \"d7e0cb5f-226c-4617-a92f-f87b8e595498\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.146927 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-config\") pod \"d7e0cb5f-226c-4617-a92f-f87b8e595498\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.146953 5000 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dfssd\" (UniqueName: \"kubernetes.io/projected/d7e0cb5f-226c-4617-a92f-f87b8e595498-kube-api-access-dfssd\") pod \"d7e0cb5f-226c-4617-a92f-f87b8e595498\" (UID: \"d7e0cb5f-226c-4617-a92f-f87b8e595498\") " Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.164927 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e0cb5f-226c-4617-a92f-f87b8e595498-kube-api-access-dfssd" (OuterVolumeSpecName: "kube-api-access-dfssd") pod "d7e0cb5f-226c-4617-a92f-f87b8e595498" (UID: "d7e0cb5f-226c-4617-a92f-f87b8e595498"). InnerVolumeSpecName "kube-api-access-dfssd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.214669 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-config" (OuterVolumeSpecName: "config") pod "d7e0cb5f-226c-4617-a92f-f87b8e595498" (UID: "d7e0cb5f-226c-4617-a92f-f87b8e595498"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.215223 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7e0cb5f-226c-4617-a92f-f87b8e595498" (UID: "d7e0cb5f-226c-4617-a92f-f87b8e595498"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.223497 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7e0cb5f-226c-4617-a92f-f87b8e595498" (UID: "d7e0cb5f-226c-4617-a92f-f87b8e595498"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.227415 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7e0cb5f-226c-4617-a92f-f87b8e595498" (UID: "d7e0cb5f-226c-4617-a92f-f87b8e595498"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.253758 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.253798 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.253811 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfssd\" (UniqueName: \"kubernetes.io/projected/d7e0cb5f-226c-4617-a92f-f87b8e595498-kube-api-access-dfssd\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.253824 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.253836 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e0cb5f-226c-4617-a92f-f87b8e595498-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.432869 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q74mv"] Jan 05 21:50:05 crc 
kubenswrapper[5000]: I0105 21:50:05.439479 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-q74mv"] Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.445775 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ht7kt"] Jan 05 21:50:05 crc kubenswrapper[5000]: I0105 21:50:05.452110 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-ht7kt"] Jan 05 21:50:06 crc kubenswrapper[5000]: I0105 21:50:06.021635 5000 generic.go:334] "Generic (PLEG): container finished" podID="13a76d52-5034-45e8-a156-448f54eaafaa" containerID="d5290dda34132c8bf757ea14eb389a8b3c8b4f01164a990ac6f30fb80a27df05" exitCode=0 Jan 05 21:50:06 crc kubenswrapper[5000]: I0105 21:50:06.021972 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4wcrs" event={"ID":"13a76d52-5034-45e8-a156-448f54eaafaa","Type":"ContainerDied","Data":"d5290dda34132c8bf757ea14eb389a8b3c8b4f01164a990ac6f30fb80a27df05"} Jan 05 21:50:06 crc kubenswrapper[5000]: I0105 21:50:06.026379 5000 generic.go:334] "Generic (PLEG): container finished" podID="55530057-0b94-461e-a436-74813cb5ca59" containerID="9d64d2bdaf1244bc20d0a89b314aa6af0eb1d3f944d46266a1989ebe74de2b0f" exitCode=0 Jan 05 21:50:06 crc kubenswrapper[5000]: I0105 21:50:06.026495 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f58-account-create-update-p696k" event={"ID":"55530057-0b94-461e-a436-74813cb5ca59","Type":"ContainerDied","Data":"9d64d2bdaf1244bc20d0a89b314aa6af0eb1d3f944d46266a1989ebe74de2b0f"} Jan 05 21:50:06 crc kubenswrapper[5000]: I0105 21:50:06.028484 5000 generic.go:334] "Generic (PLEG): container finished" podID="60c60f7d-7fa7-46a1-94fd-a9d5547a14f6" containerID="14b42868e27b501ad1061e46d5cd836c1b8cbb70acf1ce55f08a3fe82ed35eb9" exitCode=0 Jan 05 21:50:06 crc kubenswrapper[5000]: I0105 21:50:06.028575 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-3ea5-account-create-update-fl5lh" event={"ID":"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6","Type":"ContainerDied","Data":"14b42868e27b501ad1061e46d5cd836c1b8cbb70acf1ce55f08a3fe82ed35eb9"} Jan 05 21:50:06 crc kubenswrapper[5000]: I0105 21:50:06.032548 5000 generic.go:334] "Generic (PLEG): container finished" podID="a4110eb0-802a-41f4-a920-dbc15a48cf98" containerID="86ac1be5b8fd07eb6a5b7511b1576fe354c871a85f0bf836d50b20fdc478950f" exitCode=0 Jan 05 21:50:06 crc kubenswrapper[5000]: I0105 21:50:06.032619 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p9fkk" event={"ID":"a4110eb0-802a-41f4-a920-dbc15a48cf98","Type":"ContainerDied","Data":"86ac1be5b8fd07eb6a5b7511b1576fe354c871a85f0bf836d50b20fdc478950f"} Jan 05 21:50:06 crc kubenswrapper[5000]: I0105 21:50:06.496307 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:50:06 crc kubenswrapper[5000]: I0105 21:50:06.496360 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:50:06 crc kubenswrapper[5000]: I0105 21:50:06.541681 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.144236 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.208613 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5t566"] Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.333141 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025535fa-3a87-4b59-9f09-caa528190248" path="/var/lib/kubelet/pods/025535fa-3a87-4b59-9f09-caa528190248/volumes" Jan 05 21:50:07 crc kubenswrapper[5000]: 
I0105 21:50:07.334185 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e0cb5f-226c-4617-a92f-f87b8e595498" path="/var/lib/kubelet/pods/d7e0cb5f-226c-4617-a92f-f87b8e595498/volumes" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.545938 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4wcrs" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.701785 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h72qz\" (UniqueName: \"kubernetes.io/projected/13a76d52-5034-45e8-a156-448f54eaafaa-kube-api-access-h72qz\") pod \"13a76d52-5034-45e8-a156-448f54eaafaa\" (UID: \"13a76d52-5034-45e8-a156-448f54eaafaa\") " Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.702296 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a76d52-5034-45e8-a156-448f54eaafaa-operator-scripts\") pod \"13a76d52-5034-45e8-a156-448f54eaafaa\" (UID: \"13a76d52-5034-45e8-a156-448f54eaafaa\") " Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.702853 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a76d52-5034-45e8-a156-448f54eaafaa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13a76d52-5034-45e8-a156-448f54eaafaa" (UID: "13a76d52-5034-45e8-a156-448f54eaafaa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.703073 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13a76d52-5034-45e8-a156-448f54eaafaa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.714757 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a76d52-5034-45e8-a156-448f54eaafaa-kube-api-access-h72qz" (OuterVolumeSpecName: "kube-api-access-h72qz") pod "13a76d52-5034-45e8-a156-448f54eaafaa" (UID: "13a76d52-5034-45e8-a156-448f54eaafaa"). InnerVolumeSpecName "kube-api-access-h72qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.797740 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f58-account-create-update-p696k" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.804908 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h72qz\" (UniqueName: \"kubernetes.io/projected/13a76d52-5034-45e8-a156-448f54eaafaa-kube-api-access-h72qz\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.807270 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3ea5-account-create-update-fl5lh" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.827932 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-p9fkk" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.906095 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55530057-0b94-461e-a436-74813cb5ca59-operator-scripts\") pod \"55530057-0b94-461e-a436-74813cb5ca59\" (UID: \"55530057-0b94-461e-a436-74813cb5ca59\") " Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.906562 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55530057-0b94-461e-a436-74813cb5ca59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55530057-0b94-461e-a436-74813cb5ca59" (UID: "55530057-0b94-461e-a436-74813cb5ca59"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.906635 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwqpl\" (UniqueName: \"kubernetes.io/projected/55530057-0b94-461e-a436-74813cb5ca59-kube-api-access-pwqpl\") pod \"55530057-0b94-461e-a436-74813cb5ca59\" (UID: \"55530057-0b94-461e-a436-74813cb5ca59\") " Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.906959 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60c60f7d-7fa7-46a1-94fd-a9d5547a14f6-operator-scripts\") pod \"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6\" (UID: \"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6\") " Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.907068 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl8qz\" (UniqueName: \"kubernetes.io/projected/60c60f7d-7fa7-46a1-94fd-a9d5547a14f6-kube-api-access-tl8qz\") pod \"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6\" (UID: \"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6\") " Jan 05 21:50:07 crc 
kubenswrapper[5000]: I0105 21:50:07.907618 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55530057-0b94-461e-a436-74813cb5ca59-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.908227 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60c60f7d-7fa7-46a1-94fd-a9d5547a14f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60c60f7d-7fa7-46a1-94fd-a9d5547a14f6" (UID: "60c60f7d-7fa7-46a1-94fd-a9d5547a14f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.910112 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55530057-0b94-461e-a436-74813cb5ca59-kube-api-access-pwqpl" (OuterVolumeSpecName: "kube-api-access-pwqpl") pod "55530057-0b94-461e-a436-74813cb5ca59" (UID: "55530057-0b94-461e-a436-74813cb5ca59"). InnerVolumeSpecName "kube-api-access-pwqpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:07 crc kubenswrapper[5000]: I0105 21:50:07.910502 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c60f7d-7fa7-46a1-94fd-a9d5547a14f6-kube-api-access-tl8qz" (OuterVolumeSpecName: "kube-api-access-tl8qz") pod "60c60f7d-7fa7-46a1-94fd-a9d5547a14f6" (UID: "60c60f7d-7fa7-46a1-94fd-a9d5547a14f6"). InnerVolumeSpecName "kube-api-access-tl8qz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.008733 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4110eb0-802a-41f4-a920-dbc15a48cf98-operator-scripts\") pod \"a4110eb0-802a-41f4-a920-dbc15a48cf98\" (UID: \"a4110eb0-802a-41f4-a920-dbc15a48cf98\") " Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.008865 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxfkc\" (UniqueName: \"kubernetes.io/projected/a4110eb0-802a-41f4-a920-dbc15a48cf98-kube-api-access-zxfkc\") pod \"a4110eb0-802a-41f4-a920-dbc15a48cf98\" (UID: \"a4110eb0-802a-41f4-a920-dbc15a48cf98\") " Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.009295 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4110eb0-802a-41f4-a920-dbc15a48cf98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4110eb0-802a-41f4-a920-dbc15a48cf98" (UID: "a4110eb0-802a-41f4-a920-dbc15a48cf98"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.009419 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwqpl\" (UniqueName: \"kubernetes.io/projected/55530057-0b94-461e-a436-74813cb5ca59-kube-api-access-pwqpl\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.009446 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4110eb0-802a-41f4-a920-dbc15a48cf98-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.009459 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60c60f7d-7fa7-46a1-94fd-a9d5547a14f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.009471 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl8qz\" (UniqueName: \"kubernetes.io/projected/60c60f7d-7fa7-46a1-94fd-a9d5547a14f6-kube-api-access-tl8qz\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.012525 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4110eb0-802a-41f4-a920-dbc15a48cf98-kube-api-access-zxfkc" (OuterVolumeSpecName: "kube-api-access-zxfkc") pod "a4110eb0-802a-41f4-a920-dbc15a48cf98" (UID: "a4110eb0-802a-41f4-a920-dbc15a48cf98"). InnerVolumeSpecName "kube-api-access-zxfkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.047873 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4wcrs" event={"ID":"13a76d52-5034-45e8-a156-448f54eaafaa","Type":"ContainerDied","Data":"ffabf31302a4e93ee186ad7f1c64151d9c8f25c7d33e6ab1b3ce0a566d66acdd"} Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.047944 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffabf31302a4e93ee186ad7f1c64151d9c8f25c7d33e6ab1b3ce0a566d66acdd" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.047905 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4wcrs" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.050228 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f58-account-create-update-p696k" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.050247 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f58-account-create-update-p696k" event={"ID":"55530057-0b94-461e-a436-74813cb5ca59","Type":"ContainerDied","Data":"80e948edab85ea93b26e3aa5eb49f1aabbc673dd673a6990490974322da046ba"} Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.050289 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80e948edab85ea93b26e3aa5eb49f1aabbc673dd673a6990490974322da046ba" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.051980 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ea5-account-create-update-fl5lh" event={"ID":"60c60f7d-7fa7-46a1-94fd-a9d5547a14f6","Type":"ContainerDied","Data":"68ab8f6652c9e9fe64beb39b27071e193b55926d56acdc7905e09f2961b82d1f"} Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.052019 5000 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="68ab8f6652c9e9fe64beb39b27071e193b55926d56acdc7905e09f2961b82d1f" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.052092 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3ea5-account-create-update-fl5lh" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.054171 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p9fkk" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.054199 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p9fkk" event={"ID":"a4110eb0-802a-41f4-a920-dbc15a48cf98","Type":"ContainerDied","Data":"a9ff90003027a84b5cac2a475c03f4ecb60d51f1e83fe62a9253853cfbe04525"} Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.054216 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9ff90003027a84b5cac2a475c03f4ecb60d51f1e83fe62a9253853cfbe04525" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.110567 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxfkc\" (UniqueName: \"kubernetes.io/projected/a4110eb0-802a-41f4-a920-dbc15a48cf98-kube-api-access-zxfkc\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.935337 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wwhvf"] Jan 05 21:50:08 crc kubenswrapper[5000]: E0105 21:50:08.935757 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a76d52-5034-45e8-a156-448f54eaafaa" containerName="mariadb-database-create" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.935777 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a76d52-5034-45e8-a156-448f54eaafaa" containerName="mariadb-database-create" Jan 05 21:50:08 crc kubenswrapper[5000]: E0105 21:50:08.935798 5000 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a4110eb0-802a-41f4-a920-dbc15a48cf98" containerName="mariadb-database-create" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.935807 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4110eb0-802a-41f4-a920-dbc15a48cf98" containerName="mariadb-database-create" Jan 05 21:50:08 crc kubenswrapper[5000]: E0105 21:50:08.935841 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025535fa-3a87-4b59-9f09-caa528190248" containerName="mariadb-account-create-update" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.935848 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="025535fa-3a87-4b59-9f09-caa528190248" containerName="mariadb-account-create-update" Jan 05 21:50:08 crc kubenswrapper[5000]: E0105 21:50:08.935860 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c60f7d-7fa7-46a1-94fd-a9d5547a14f6" containerName="mariadb-account-create-update" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.935866 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c60f7d-7fa7-46a1-94fd-a9d5547a14f6" containerName="mariadb-account-create-update" Jan 05 21:50:08 crc kubenswrapper[5000]: E0105 21:50:08.935876 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e0cb5f-226c-4617-a92f-f87b8e595498" containerName="init" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.935881 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e0cb5f-226c-4617-a92f-f87b8e595498" containerName="init" Jan 05 21:50:08 crc kubenswrapper[5000]: E0105 21:50:08.935909 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55530057-0b94-461e-a436-74813cb5ca59" containerName="mariadb-account-create-update" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.935917 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="55530057-0b94-461e-a436-74813cb5ca59" containerName="mariadb-account-create-update" Jan 05 21:50:08 crc kubenswrapper[5000]: E0105 21:50:08.935930 
5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e0cb5f-226c-4617-a92f-f87b8e595498" containerName="dnsmasq-dns" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.935936 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e0cb5f-226c-4617-a92f-f87b8e595498" containerName="dnsmasq-dns" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.936110 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="55530057-0b94-461e-a436-74813cb5ca59" containerName="mariadb-account-create-update" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.936129 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e0cb5f-226c-4617-a92f-f87b8e595498" containerName="dnsmasq-dns" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.936144 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="025535fa-3a87-4b59-9f09-caa528190248" containerName="mariadb-account-create-update" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.936154 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c60f7d-7fa7-46a1-94fd-a9d5547a14f6" containerName="mariadb-account-create-update" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.936162 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4110eb0-802a-41f4-a920-dbc15a48cf98" containerName="mariadb-database-create" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.936175 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a76d52-5034-45e8-a156-448f54eaafaa" containerName="mariadb-database-create" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.936927 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wwhvf" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.938564 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 05 21:50:08 crc kubenswrapper[5000]: I0105 21:50:08.946039 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wwhvf"] Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.027842 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmfkc\" (UniqueName: \"kubernetes.io/projected/1e2f1d5c-063f-4075-8d34-8ae96f833eb9-kube-api-access-kmfkc\") pod \"root-account-create-update-wwhvf\" (UID: \"1e2f1d5c-063f-4075-8d34-8ae96f833eb9\") " pod="openstack/root-account-create-update-wwhvf" Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.028497 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e2f1d5c-063f-4075-8d34-8ae96f833eb9-operator-scripts\") pod \"root-account-create-update-wwhvf\" (UID: \"1e2f1d5c-063f-4075-8d34-8ae96f833eb9\") " pod="openstack/root-account-create-update-wwhvf" Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.063837 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5t566" podUID="9e0bfec2-b111-430b-a47b-f8f8a661b594" containerName="registry-server" containerID="cri-o://62b7f798254976d5b217217f244779d8a8f5eca5ee4aa6fb00757d6ead576149" gracePeriod=2 Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.130020 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e2f1d5c-063f-4075-8d34-8ae96f833eb9-operator-scripts\") pod \"root-account-create-update-wwhvf\" (UID: \"1e2f1d5c-063f-4075-8d34-8ae96f833eb9\") " 
pod="openstack/root-account-create-update-wwhvf" Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.130077 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmfkc\" (UniqueName: \"kubernetes.io/projected/1e2f1d5c-063f-4075-8d34-8ae96f833eb9-kube-api-access-kmfkc\") pod \"root-account-create-update-wwhvf\" (UID: \"1e2f1d5c-063f-4075-8d34-8ae96f833eb9\") " pod="openstack/root-account-create-update-wwhvf" Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.131204 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e2f1d5c-063f-4075-8d34-8ae96f833eb9-operator-scripts\") pod \"root-account-create-update-wwhvf\" (UID: \"1e2f1d5c-063f-4075-8d34-8ae96f833eb9\") " pod="openstack/root-account-create-update-wwhvf" Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.146761 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmfkc\" (UniqueName: \"kubernetes.io/projected/1e2f1d5c-063f-4075-8d34-8ae96f833eb9-kube-api-access-kmfkc\") pod \"root-account-create-update-wwhvf\" (UID: \"1e2f1d5c-063f-4075-8d34-8ae96f833eb9\") " pod="openstack/root-account-create-update-wwhvf" Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.293321 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wwhvf" Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.597849 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.740254 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc4m8\" (UniqueName: \"kubernetes.io/projected/9e0bfec2-b111-430b-a47b-f8f8a661b594-kube-api-access-pc4m8\") pod \"9e0bfec2-b111-430b-a47b-f8f8a661b594\" (UID: \"9e0bfec2-b111-430b-a47b-f8f8a661b594\") " Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.740400 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0bfec2-b111-430b-a47b-f8f8a661b594-utilities\") pod \"9e0bfec2-b111-430b-a47b-f8f8a661b594\" (UID: \"9e0bfec2-b111-430b-a47b-f8f8a661b594\") " Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.740497 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0bfec2-b111-430b-a47b-f8f8a661b594-catalog-content\") pod \"9e0bfec2-b111-430b-a47b-f8f8a661b594\" (UID: \"9e0bfec2-b111-430b-a47b-f8f8a661b594\") " Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.745376 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e0bfec2-b111-430b-a47b-f8f8a661b594-utilities" (OuterVolumeSpecName: "utilities") pod "9e0bfec2-b111-430b-a47b-f8f8a661b594" (UID: "9e0bfec2-b111-430b-a47b-f8f8a661b594"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.753064 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0bfec2-b111-430b-a47b-f8f8a661b594-kube-api-access-pc4m8" (OuterVolumeSpecName: "kube-api-access-pc4m8") pod "9e0bfec2-b111-430b-a47b-f8f8a661b594" (UID: "9e0bfec2-b111-430b-a47b-f8f8a661b594"). InnerVolumeSpecName "kube-api-access-pc4m8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.763400 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wwhvf"] Jan 05 21:50:09 crc kubenswrapper[5000]: W0105 21:50:09.770883 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e2f1d5c_063f_4075_8d34_8ae96f833eb9.slice/crio-97d4e60096526cfa2424ebaddd579c64d6ad0650254f4dbc4c377002d717eeda WatchSource:0}: Error finding container 97d4e60096526cfa2424ebaddd579c64d6ad0650254f4dbc4c377002d717eeda: Status 404 returned error can't find the container with id 97d4e60096526cfa2424ebaddd579c64d6ad0650254f4dbc4c377002d717eeda Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.801751 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e0bfec2-b111-430b-a47b-f8f8a661b594-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e0bfec2-b111-430b-a47b-f8f8a661b594" (UID: "9e0bfec2-b111-430b-a47b-f8f8a661b594"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.842219 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc4m8\" (UniqueName: \"kubernetes.io/projected/9e0bfec2-b111-430b-a47b-f8f8a661b594-kube-api-access-pc4m8\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.842252 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e0bfec2-b111-430b-a47b-f8f8a661b594-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:09 crc kubenswrapper[5000]: I0105 21:50:09.842261 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e0bfec2-b111-430b-a47b-f8f8a661b594-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.074314 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wwhvf" event={"ID":"1e2f1d5c-063f-4075-8d34-8ae96f833eb9","Type":"ContainerStarted","Data":"c71735cb13e7fc7d9c43c4f17398961bf9a429bbd87abee69882634e05dd7601"} Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.074691 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wwhvf" event={"ID":"1e2f1d5c-063f-4075-8d34-8ae96f833eb9","Type":"ContainerStarted","Data":"97d4e60096526cfa2424ebaddd579c64d6ad0650254f4dbc4c377002d717eeda"} Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.081553 5000 generic.go:334] "Generic (PLEG): container finished" podID="9e0bfec2-b111-430b-a47b-f8f8a661b594" containerID="62b7f798254976d5b217217f244779d8a8f5eca5ee4aa6fb00757d6ead576149" exitCode=0 Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.081603 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5t566" 
event={"ID":"9e0bfec2-b111-430b-a47b-f8f8a661b594","Type":"ContainerDied","Data":"62b7f798254976d5b217217f244779d8a8f5eca5ee4aa6fb00757d6ead576149"} Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.081636 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5t566" event={"ID":"9e0bfec2-b111-430b-a47b-f8f8a661b594","Type":"ContainerDied","Data":"72cd40c508a14fa587d59bf2529f9587aba953693058840728a3dd5af10a17d2"} Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.081659 5000 scope.go:117] "RemoveContainer" containerID="62b7f798254976d5b217217f244779d8a8f5eca5ee4aa6fb00757d6ead576149" Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.081921 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5t566" Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.097620 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-wwhvf" podStartSLOduration=2.097604531 podStartE2EDuration="2.097604531s" podCreationTimestamp="2026-01-05 21:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:10.093552395 +0000 UTC m=+965.049754854" watchObservedRunningTime="2026-01-05 21:50:10.097604531 +0000 UTC m=+965.053807000" Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.150711 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5t566"] Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.158920 5000 scope.go:117] "RemoveContainer" containerID="b567b9189c5490e899011f8e1854922d90cf5a7f774fa1f58fdd281f18b0d569" Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.164788 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5t566"] Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 
21:50:10.180079 5000 scope.go:117] "RemoveContainer" containerID="b25f89013ee9932596047f94579fbc00d9299f19c80263bc59c2c6677d5928db" Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.197454 5000 scope.go:117] "RemoveContainer" containerID="62b7f798254976d5b217217f244779d8a8f5eca5ee4aa6fb00757d6ead576149" Jan 05 21:50:10 crc kubenswrapper[5000]: E0105 21:50:10.197900 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62b7f798254976d5b217217f244779d8a8f5eca5ee4aa6fb00757d6ead576149\": container with ID starting with 62b7f798254976d5b217217f244779d8a8f5eca5ee4aa6fb00757d6ead576149 not found: ID does not exist" containerID="62b7f798254976d5b217217f244779d8a8f5eca5ee4aa6fb00757d6ead576149" Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.197941 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62b7f798254976d5b217217f244779d8a8f5eca5ee4aa6fb00757d6ead576149"} err="failed to get container status \"62b7f798254976d5b217217f244779d8a8f5eca5ee4aa6fb00757d6ead576149\": rpc error: code = NotFound desc = could not find container \"62b7f798254976d5b217217f244779d8a8f5eca5ee4aa6fb00757d6ead576149\": container with ID starting with 62b7f798254976d5b217217f244779d8a8f5eca5ee4aa6fb00757d6ead576149 not found: ID does not exist" Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.197970 5000 scope.go:117] "RemoveContainer" containerID="b567b9189c5490e899011f8e1854922d90cf5a7f774fa1f58fdd281f18b0d569" Jan 05 21:50:10 crc kubenswrapper[5000]: E0105 21:50:10.198249 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b567b9189c5490e899011f8e1854922d90cf5a7f774fa1f58fdd281f18b0d569\": container with ID starting with b567b9189c5490e899011f8e1854922d90cf5a7f774fa1f58fdd281f18b0d569 not found: ID does not exist" 
containerID="b567b9189c5490e899011f8e1854922d90cf5a7f774fa1f58fdd281f18b0d569" Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.198270 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b567b9189c5490e899011f8e1854922d90cf5a7f774fa1f58fdd281f18b0d569"} err="failed to get container status \"b567b9189c5490e899011f8e1854922d90cf5a7f774fa1f58fdd281f18b0d569\": rpc error: code = NotFound desc = could not find container \"b567b9189c5490e899011f8e1854922d90cf5a7f774fa1f58fdd281f18b0d569\": container with ID starting with b567b9189c5490e899011f8e1854922d90cf5a7f774fa1f58fdd281f18b0d569 not found: ID does not exist" Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.198286 5000 scope.go:117] "RemoveContainer" containerID="b25f89013ee9932596047f94579fbc00d9299f19c80263bc59c2c6677d5928db" Jan 05 21:50:10 crc kubenswrapper[5000]: E0105 21:50:10.198645 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b25f89013ee9932596047f94579fbc00d9299f19c80263bc59c2c6677d5928db\": container with ID starting with b25f89013ee9932596047f94579fbc00d9299f19c80263bc59c2c6677d5928db not found: ID does not exist" containerID="b25f89013ee9932596047f94579fbc00d9299f19c80263bc59c2c6677d5928db" Jan 05 21:50:10 crc kubenswrapper[5000]: I0105 21:50:10.198702 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b25f89013ee9932596047f94579fbc00d9299f19c80263bc59c2c6677d5928db"} err="failed to get container status \"b25f89013ee9932596047f94579fbc00d9299f19c80263bc59c2c6677d5928db\": rpc error: code = NotFound desc = could not find container \"b25f89013ee9932596047f94579fbc00d9299f19c80263bc59c2c6677d5928db\": container with ID starting with b25f89013ee9932596047f94579fbc00d9299f19c80263bc59c2c6677d5928db not found: ID does not exist" Jan 05 21:50:11 crc kubenswrapper[5000]: I0105 21:50:11.061847 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0" Jan 05 21:50:11 crc kubenswrapper[5000]: E0105 21:50:11.062158 5000 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 05 21:50:11 crc kubenswrapper[5000]: E0105 21:50:11.062313 5000 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 05 21:50:11 crc kubenswrapper[5000]: E0105 21:50:11.062368 5000 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift podName:1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f nodeName:}" failed. No retries permitted until 2026-01-05 21:50:27.062351473 +0000 UTC m=+982.018553942 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift") pod "swift-storage-0" (UID: "1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f") : configmap "swift-ring-files" not found Jan 05 21:50:11 crc kubenswrapper[5000]: I0105 21:50:11.090330 5000 generic.go:334] "Generic (PLEG): container finished" podID="1e2f1d5c-063f-4075-8d34-8ae96f833eb9" containerID="c71735cb13e7fc7d9c43c4f17398961bf9a429bbd87abee69882634e05dd7601" exitCode=0 Jan 05 21:50:11 crc kubenswrapper[5000]: I0105 21:50:11.090392 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wwhvf" event={"ID":"1e2f1d5c-063f-4075-8d34-8ae96f833eb9","Type":"ContainerDied","Data":"c71735cb13e7fc7d9c43c4f17398961bf9a429bbd87abee69882634e05dd7601"} Jan 05 21:50:11 crc kubenswrapper[5000]: I0105 21:50:11.335795 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e0bfec2-b111-430b-a47b-f8f8a661b594" path="/var/lib/kubelet/pods/9e0bfec2-b111-430b-a47b-f8f8a661b594/volumes" Jan 05 21:50:12 crc kubenswrapper[5000]: I0105 21:50:12.100379 5000 generic.go:334] "Generic (PLEG): container finished" podID="bcee38b5-1aa2-4d3f-8545-dfc618226422" containerID="a9ddb305516f27f2e023117bd5912b0c90d1546ab56315e838e8034a9ec489cd" exitCode=0 Jan 05 21:50:12 crc kubenswrapper[5000]: I0105 21:50:12.100471 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nkpzh" event={"ID":"bcee38b5-1aa2-4d3f-8545-dfc618226422","Type":"ContainerDied","Data":"a9ddb305516f27f2e023117bd5912b0c90d1546ab56315e838e8034a9ec489cd"} Jan 05 21:50:13 crc kubenswrapper[5000]: I0105 21:50:13.680320 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qtwd6" podUID="30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1" containerName="ovn-controller" probeResult="failure" output=< Jan 05 21:50:13 crc kubenswrapper[5000]: ERROR - ovn-controller connection status is 'not 
connected', expecting 'connected' status Jan 05 21:50:13 crc kubenswrapper[5000]: > Jan 05 21:50:13 crc kubenswrapper[5000]: I0105 21:50:13.800411 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:50:13 crc kubenswrapper[5000]: I0105 21:50:13.811885 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cgdx9" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.018551 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qtwd6-config-957pc"] Jan 05 21:50:14 crc kubenswrapper[5000]: E0105 21:50:14.018936 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0bfec2-b111-430b-a47b-f8f8a661b594" containerName="extract-utilities" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.018966 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0bfec2-b111-430b-a47b-f8f8a661b594" containerName="extract-utilities" Jan 05 21:50:14 crc kubenswrapper[5000]: E0105 21:50:14.018985 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0bfec2-b111-430b-a47b-f8f8a661b594" containerName="extract-content" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.018993 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0bfec2-b111-430b-a47b-f8f8a661b594" containerName="extract-content" Jan 05 21:50:14 crc kubenswrapper[5000]: E0105 21:50:14.019005 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0bfec2-b111-430b-a47b-f8f8a661b594" containerName="registry-server" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.019012 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0bfec2-b111-430b-a47b-f8f8a661b594" containerName="registry-server" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.019182 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0bfec2-b111-430b-a47b-f8f8a661b594" 
containerName="registry-server" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.019811 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.025101 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.047510 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qtwd6-config-957pc"] Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.114355 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-additional-scripts\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.114398 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-log-ovn\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.114476 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-run\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.114564 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-run-ovn\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.114620 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-scripts\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.114655 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd47n\" (UniqueName: \"kubernetes.io/projected/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-kube-api-access-sd47n\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.131571 5000 generic.go:334] "Generic (PLEG): container finished" podID="a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" containerID="af231ca02683df2a57ad6222bd1109d5d3b597c0c7de112a1efd70dd203cc63f" exitCode=0 Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.131650 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb","Type":"ContainerDied","Data":"af231ca02683df2a57ad6222bd1109d5d3b597c0c7de112a1efd70dd203cc63f"} Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.134231 5000 generic.go:334] "Generic (PLEG): container finished" podID="03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" containerID="e176a95266bbce415d6b9a50c016e5284a45a76f8998709371f840490feb885a" exitCode=0 Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.135330 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e","Type":"ContainerDied","Data":"e176a95266bbce415d6b9a50c016e5284a45a76f8998709371f840490feb885a"} Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.216189 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-additional-scripts\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.216237 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-log-ovn\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.216347 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-run\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.216391 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-run-ovn\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.216605 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-scripts\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.216630 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-run\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.216679 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd47n\" (UniqueName: \"kubernetes.io/projected/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-kube-api-access-sd47n\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.217223 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-log-ovn\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.217989 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-additional-scripts\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.220229 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-scripts\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.224967 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-run-ovn\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.235838 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd47n\" (UniqueName: \"kubernetes.io/projected/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-kube-api-access-sd47n\") pod \"ovn-controller-qtwd6-config-957pc\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:14 crc kubenswrapper[5000]: I0105 21:50:14.340167 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:17 crc kubenswrapper[5000]: I0105 21:50:17.940216 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wwhvf" Jan 05 21:50:17 crc kubenswrapper[5000]: I0105 21:50:17.988167 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.100846 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcee38b5-1aa2-4d3f-8545-dfc618226422-scripts\") pod \"bcee38b5-1aa2-4d3f-8545-dfc618226422\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.101222 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6zm6\" (UniqueName: \"kubernetes.io/projected/bcee38b5-1aa2-4d3f-8545-dfc618226422-kube-api-access-f6zm6\") pod \"bcee38b5-1aa2-4d3f-8545-dfc618226422\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.101268 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e2f1d5c-063f-4075-8d34-8ae96f833eb9-operator-scripts\") pod \"1e2f1d5c-063f-4075-8d34-8ae96f833eb9\" (UID: \"1e2f1d5c-063f-4075-8d34-8ae96f833eb9\") " Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.101298 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmfkc\" (UniqueName: \"kubernetes.io/projected/1e2f1d5c-063f-4075-8d34-8ae96f833eb9-kube-api-access-kmfkc\") pod \"1e2f1d5c-063f-4075-8d34-8ae96f833eb9\" (UID: \"1e2f1d5c-063f-4075-8d34-8ae96f833eb9\") " Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.101349 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bcee38b5-1aa2-4d3f-8545-dfc618226422-ring-data-devices\") pod \"bcee38b5-1aa2-4d3f-8545-dfc618226422\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.101369 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-combined-ca-bundle\") pod \"bcee38b5-1aa2-4d3f-8545-dfc618226422\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.101444 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-swiftconf\") pod \"bcee38b5-1aa2-4d3f-8545-dfc618226422\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.101513 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bcee38b5-1aa2-4d3f-8545-dfc618226422-etc-swift\") pod \"bcee38b5-1aa2-4d3f-8545-dfc618226422\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.101546 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-dispersionconf\") pod \"bcee38b5-1aa2-4d3f-8545-dfc618226422\" (UID: \"bcee38b5-1aa2-4d3f-8545-dfc618226422\") " Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.102372 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcee38b5-1aa2-4d3f-8545-dfc618226422-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bcee38b5-1aa2-4d3f-8545-dfc618226422" (UID: "bcee38b5-1aa2-4d3f-8545-dfc618226422"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.102474 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e2f1d5c-063f-4075-8d34-8ae96f833eb9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e2f1d5c-063f-4075-8d34-8ae96f833eb9" (UID: "1e2f1d5c-063f-4075-8d34-8ae96f833eb9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.102602 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcee38b5-1aa2-4d3f-8545-dfc618226422-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bcee38b5-1aa2-4d3f-8545-dfc618226422" (UID: "bcee38b5-1aa2-4d3f-8545-dfc618226422"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.107964 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e2f1d5c-063f-4075-8d34-8ae96f833eb9-kube-api-access-kmfkc" (OuterVolumeSpecName: "kube-api-access-kmfkc") pod "1e2f1d5c-063f-4075-8d34-8ae96f833eb9" (UID: "1e2f1d5c-063f-4075-8d34-8ae96f833eb9"). InnerVolumeSpecName "kube-api-access-kmfkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.108356 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcee38b5-1aa2-4d3f-8545-dfc618226422-kube-api-access-f6zm6" (OuterVolumeSpecName: "kube-api-access-f6zm6") pod "bcee38b5-1aa2-4d3f-8545-dfc618226422" (UID: "bcee38b5-1aa2-4d3f-8545-dfc618226422"). InnerVolumeSpecName "kube-api-access-f6zm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.111139 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bcee38b5-1aa2-4d3f-8545-dfc618226422" (UID: "bcee38b5-1aa2-4d3f-8545-dfc618226422"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.130806 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcee38b5-1aa2-4d3f-8545-dfc618226422-scripts" (OuterVolumeSpecName: "scripts") pod "bcee38b5-1aa2-4d3f-8545-dfc618226422" (UID: "bcee38b5-1aa2-4d3f-8545-dfc618226422"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.133531 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bcee38b5-1aa2-4d3f-8545-dfc618226422" (UID: "bcee38b5-1aa2-4d3f-8545-dfc618226422"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.143574 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcee38b5-1aa2-4d3f-8545-dfc618226422" (UID: "bcee38b5-1aa2-4d3f-8545-dfc618226422"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.169947 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qtwd6-config-957pc"] Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.170190 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wwhvf" event={"ID":"1e2f1d5c-063f-4075-8d34-8ae96f833eb9","Type":"ContainerDied","Data":"97d4e60096526cfa2424ebaddd579c64d6ad0650254f4dbc4c377002d717eeda"} Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.170238 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97d4e60096526cfa2424ebaddd579c64d6ad0650254f4dbc4c377002d717eeda" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.170309 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wwhvf" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.173063 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e","Type":"ContainerStarted","Data":"db02c684bfc93b249b4a800cd88b0cdf838e618435ddc9d4a16848863837c9be"} Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.174641 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.176243 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nkpzh" event={"ID":"bcee38b5-1aa2-4d3f-8545-dfc618226422","Type":"ContainerDied","Data":"ed78df8aa84f6334f7279414149bfe507759a2dd9840b53f93605eaff5f89b11"} Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.176337 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed78df8aa84f6334f7279414149bfe507759a2dd9840b53f93605eaff5f89b11" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 
21:50:18.177575 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nkpzh" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.179217 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb","Type":"ContainerStarted","Data":"d516cc86801d2fef1efa27867fedb000b7acd6955f9965b5d9faba1cd6611430"} Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.180038 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:50:18 crc kubenswrapper[5000]: W0105 21:50:18.180474 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8d49f59_d1a4_4a84_95e6_af4fb7f29a7d.slice/crio-1279177351e0f0003daac1fae415de669005eba52a911a38ba2b30fa14bf44ad WatchSource:0}: Error finding container 1279177351e0f0003daac1fae415de669005eba52a911a38ba2b30fa14bf44ad: Status 404 returned error can't find the container with id 1279177351e0f0003daac1fae415de669005eba52a911a38ba2b30fa14bf44ad Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.202232 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.624245992 podStartE2EDuration="1m1.202210988s" podCreationTimestamp="2026-01-05 21:49:17 +0000 UTC" firstStartedPulling="2026-01-05 21:49:30.032819524 +0000 UTC m=+924.989021993" lastFinishedPulling="2026-01-05 21:49:38.61078452 +0000 UTC m=+933.566986989" observedRunningTime="2026-01-05 21:50:18.199780109 +0000 UTC m=+973.155982598" watchObservedRunningTime="2026-01-05 21:50:18.202210988 +0000 UTC m=+973.158413457" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.203612 5000 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-swiftconf\") 
on node \"crc\" DevicePath \"\"" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.203683 5000 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bcee38b5-1aa2-4d3f-8545-dfc618226422-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.203696 5000 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.203707 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcee38b5-1aa2-4d3f-8545-dfc618226422-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.203721 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6zm6\" (UniqueName: \"kubernetes.io/projected/bcee38b5-1aa2-4d3f-8545-dfc618226422-kube-api-access-f6zm6\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.203735 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e2f1d5c-063f-4075-8d34-8ae96f833eb9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.203746 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmfkc\" (UniqueName: \"kubernetes.io/projected/1e2f1d5c-063f-4075-8d34-8ae96f833eb9-kube-api-access-kmfkc\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.203757 5000 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bcee38b5-1aa2-4d3f-8545-dfc618226422-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.203769 5000 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcee38b5-1aa2-4d3f-8545-dfc618226422-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.232613 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.29583879 podStartE2EDuration="1m1.232553983s" podCreationTimestamp="2026-01-05 21:49:17 +0000 UTC" firstStartedPulling="2026-01-05 21:49:29.716276513 +0000 UTC m=+924.672478982" lastFinishedPulling="2026-01-05 21:49:37.652991706 +0000 UTC m=+932.609194175" observedRunningTime="2026-01-05 21:50:18.229401313 +0000 UTC m=+973.185603782" watchObservedRunningTime="2026-01-05 21:50:18.232553983 +0000 UTC m=+973.188756452" Jan 05 21:50:18 crc kubenswrapper[5000]: I0105 21:50:18.686154 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qtwd6" Jan 05 21:50:19 crc kubenswrapper[5000]: I0105 21:50:19.187668 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zg424" event={"ID":"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f","Type":"ContainerStarted","Data":"80cac4240af458a184a16d566eb066c70df0ce539a9dee0bedb5c89f8f36b75c"} Jan 05 21:50:19 crc kubenswrapper[5000]: I0105 21:50:19.190704 5000 generic.go:334] "Generic (PLEG): container finished" podID="b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d" containerID="06c8e285cb3c449ce058e0732d6d7b991ed9292f3ee0e350a56fda27f5346c8a" exitCode=0 Jan 05 21:50:19 crc kubenswrapper[5000]: I0105 21:50:19.190780 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qtwd6-config-957pc" event={"ID":"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d","Type":"ContainerDied","Data":"06c8e285cb3c449ce058e0732d6d7b991ed9292f3ee0e350a56fda27f5346c8a"} Jan 05 21:50:19 crc kubenswrapper[5000]: I0105 21:50:19.190828 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-qtwd6-config-957pc" event={"ID":"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d","Type":"ContainerStarted","Data":"1279177351e0f0003daac1fae415de669005eba52a911a38ba2b30fa14bf44ad"} Jan 05 21:50:19 crc kubenswrapper[5000]: I0105 21:50:19.208741 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-zg424" podStartSLOduration=4.181004468 podStartE2EDuration="17.20871834s" podCreationTimestamp="2026-01-05 21:50:02 +0000 UTC" firstStartedPulling="2026-01-05 21:50:04.771994286 +0000 UTC m=+959.728196755" lastFinishedPulling="2026-01-05 21:50:17.799708158 +0000 UTC m=+972.755910627" observedRunningTime="2026-01-05 21:50:19.201437002 +0000 UTC m=+974.157639471" watchObservedRunningTime="2026-01-05 21:50:19.20871834 +0000 UTC m=+974.164920809" Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.457346 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wwhvf"] Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.475387 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wwhvf"] Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.584423 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.742345 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-log-ovn\") pod \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.742404 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-additional-scripts\") pod \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.742458 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d" (UID: "b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.742494 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-scripts\") pod \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.742576 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-run-ovn\") pod \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.742622 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd47n\" (UniqueName: \"kubernetes.io/projected/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-kube-api-access-sd47n\") pod \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.742678 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d" (UID: "b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.742695 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-run\") pod \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\" (UID: \"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d\") " Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.743053 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-run" (OuterVolumeSpecName: "var-run") pod "b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d" (UID: "b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.743123 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d" (UID: "b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.743130 5000 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.743317 5000 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.743463 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-scripts" (OuterVolumeSpecName: "scripts") pod "b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d" (UID: "b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.748958 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-kube-api-access-sd47n" (OuterVolumeSpecName: "kube-api-access-sd47n") pod "b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d" (UID: "b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d"). InnerVolumeSpecName "kube-api-access-sd47n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.845326 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.845606 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd47n\" (UniqueName: \"kubernetes.io/projected/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-kube-api-access-sd47n\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.845617 5000 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-var-run\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:20 crc kubenswrapper[5000]: I0105 21:50:20.845625 5000 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.044271 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kg5r7"] Jan 05 21:50:21 crc kubenswrapper[5000]: E0105 21:50:21.044669 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcee38b5-1aa2-4d3f-8545-dfc618226422" containerName="swift-ring-rebalance" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.044692 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcee38b5-1aa2-4d3f-8545-dfc618226422" containerName="swift-ring-rebalance" Jan 05 21:50:21 crc kubenswrapper[5000]: E0105 21:50:21.044702 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2f1d5c-063f-4075-8d34-8ae96f833eb9" containerName="mariadb-account-create-update" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.044710 5000 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2f1d5c-063f-4075-8d34-8ae96f833eb9" containerName="mariadb-account-create-update" Jan 05 21:50:21 crc kubenswrapper[5000]: E0105 21:50:21.044724 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d" containerName="ovn-config" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.044732 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d" containerName="ovn-config" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.044943 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2f1d5c-063f-4075-8d34-8ae96f833eb9" containerName="mariadb-account-create-update" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.044958 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d" containerName="ovn-config" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.044972 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcee38b5-1aa2-4d3f-8545-dfc618226422" containerName="swift-ring-rebalance" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.046284 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.056036 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg5r7"] Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.149845 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42nxk\" (UniqueName: \"kubernetes.io/projected/c136dd3d-0202-41d3-bdd8-6cc50947b925-kube-api-access-42nxk\") pod \"redhat-marketplace-kg5r7\" (UID: \"c136dd3d-0202-41d3-bdd8-6cc50947b925\") " pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.150122 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c136dd3d-0202-41d3-bdd8-6cc50947b925-catalog-content\") pod \"redhat-marketplace-kg5r7\" (UID: \"c136dd3d-0202-41d3-bdd8-6cc50947b925\") " pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.150206 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c136dd3d-0202-41d3-bdd8-6cc50947b925-utilities\") pod \"redhat-marketplace-kg5r7\" (UID: \"c136dd3d-0202-41d3-bdd8-6cc50947b925\") " pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.207218 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qtwd6-config-957pc" event={"ID":"b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d","Type":"ContainerDied","Data":"1279177351e0f0003daac1fae415de669005eba52a911a38ba2b30fa14bf44ad"} Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.207259 5000 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1279177351e0f0003daac1fae415de669005eba52a911a38ba2b30fa14bf44ad" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.207504 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qtwd6-config-957pc" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.251377 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42nxk\" (UniqueName: \"kubernetes.io/projected/c136dd3d-0202-41d3-bdd8-6cc50947b925-kube-api-access-42nxk\") pod \"redhat-marketplace-kg5r7\" (UID: \"c136dd3d-0202-41d3-bdd8-6cc50947b925\") " pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.251474 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c136dd3d-0202-41d3-bdd8-6cc50947b925-catalog-content\") pod \"redhat-marketplace-kg5r7\" (UID: \"c136dd3d-0202-41d3-bdd8-6cc50947b925\") " pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.251501 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c136dd3d-0202-41d3-bdd8-6cc50947b925-utilities\") pod \"redhat-marketplace-kg5r7\" (UID: \"c136dd3d-0202-41d3-bdd8-6cc50947b925\") " pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.251933 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c136dd3d-0202-41d3-bdd8-6cc50947b925-utilities\") pod \"redhat-marketplace-kg5r7\" (UID: \"c136dd3d-0202-41d3-bdd8-6cc50947b925\") " pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.252012 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/c136dd3d-0202-41d3-bdd8-6cc50947b925-catalog-content\") pod \"redhat-marketplace-kg5r7\" (UID: \"c136dd3d-0202-41d3-bdd8-6cc50947b925\") " pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.285846 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42nxk\" (UniqueName: \"kubernetes.io/projected/c136dd3d-0202-41d3-bdd8-6cc50947b925-kube-api-access-42nxk\") pod \"redhat-marketplace-kg5r7\" (UID: \"c136dd3d-0202-41d3-bdd8-6cc50947b925\") " pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.332817 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e2f1d5c-063f-4075-8d34-8ae96f833eb9" path="/var/lib/kubelet/pods/1e2f1d5c-063f-4075-8d34-8ae96f833eb9/volumes" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.410495 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.693951 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qtwd6-config-957pc"] Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.705178 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qtwd6-config-957pc"] Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.815516 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qtwd6-config-5lkxb"] Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.816589 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.819378 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.829605 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qtwd6-config-5lkxb"] Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.898149 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg5r7"] Jan 05 21:50:21 crc kubenswrapper[5000]: W0105 21:50:21.903028 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc136dd3d_0202_41d3_bdd8_6cc50947b925.slice/crio-3944d5852cfda95d55eff38987365fb609573e4b60a0d17c8a7a1eaa34168d28 WatchSource:0}: Error finding container 3944d5852cfda95d55eff38987365fb609573e4b60a0d17c8a7a1eaa34168d28: Status 404 returned error can't find the container with id 3944d5852cfda95d55eff38987365fb609573e4b60a0d17c8a7a1eaa34168d28 Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.961685 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-run-ovn\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.962034 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ae428c6-4122-4086-825a-07254a8a8ed3-scripts\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 
21:50:21.962057 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-log-ovn\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.962096 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae428c6-4122-4086-825a-07254a8a8ed3-additional-scripts\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.962115 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-run\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:21 crc kubenswrapper[5000]: I0105 21:50:21.962389 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzg4c\" (UniqueName: \"kubernetes.io/projected/2ae428c6-4122-4086-825a-07254a8a8ed3-kube-api-access-fzg4c\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.063517 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-run-ovn\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " 
pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.063551 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ae428c6-4122-4086-825a-07254a8a8ed3-scripts\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.063572 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-log-ovn\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.063608 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae428c6-4122-4086-825a-07254a8a8ed3-additional-scripts\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.063627 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-run\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.063698 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzg4c\" (UniqueName: \"kubernetes.io/projected/2ae428c6-4122-4086-825a-07254a8a8ed3-kube-api-access-fzg4c\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " 
pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.063881 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-run-ovn\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.063972 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-run\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.064007 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-log-ovn\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.064428 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae428c6-4122-4086-825a-07254a8a8ed3-additional-scripts\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.065563 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ae428c6-4122-4086-825a-07254a8a8ed3-scripts\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc 
kubenswrapper[5000]: I0105 21:50:22.086535 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzg4c\" (UniqueName: \"kubernetes.io/projected/2ae428c6-4122-4086-825a-07254a8a8ed3-kube-api-access-fzg4c\") pod \"ovn-controller-qtwd6-config-5lkxb\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.134164 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.220126 5000 generic.go:334] "Generic (PLEG): container finished" podID="c136dd3d-0202-41d3-bdd8-6cc50947b925" containerID="642c7a8349f301f9e8659e12ecc5c254719c7ecada84450e15d175e8b1dfb777" exitCode=0 Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.220172 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg5r7" event={"ID":"c136dd3d-0202-41d3-bdd8-6cc50947b925","Type":"ContainerDied","Data":"642c7a8349f301f9e8659e12ecc5c254719c7ecada84450e15d175e8b1dfb777"} Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.220201 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg5r7" event={"ID":"c136dd3d-0202-41d3-bdd8-6cc50947b925","Type":"ContainerStarted","Data":"3944d5852cfda95d55eff38987365fb609573e4b60a0d17c8a7a1eaa34168d28"} Jan 05 21:50:22 crc kubenswrapper[5000]: I0105 21:50:22.643586 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qtwd6-config-5lkxb"] Jan 05 21:50:23 crc kubenswrapper[5000]: I0105 21:50:23.098671 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 
05 21:50:23 crc kubenswrapper[5000]: I0105 21:50:23.099007 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:50:23 crc kubenswrapper[5000]: I0105 21:50:23.099047 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:50:23 crc kubenswrapper[5000]: I0105 21:50:23.099652 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fcda7dd4d8fd644f00dbabb101ded861726f4a6f3ef2d7cca2281e23671cc2ef"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:50:23 crc kubenswrapper[5000]: I0105 21:50:23.099705 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://fcda7dd4d8fd644f00dbabb101ded861726f4a6f3ef2d7cca2281e23671cc2ef" gracePeriod=600 Jan 05 21:50:24 crc kubenswrapper[5000]: I0105 21:50:24.272372 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d" path="/var/lib/kubelet/pods/b8d49f59-d1a4-4a84-95e6-af4fb7f29a7d/volumes" Jan 05 21:50:24 crc kubenswrapper[5000]: I0105 21:50:24.274124 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qtwd6-config-5lkxb" event={"ID":"2ae428c6-4122-4086-825a-07254a8a8ed3","Type":"ContainerStarted","Data":"cb3c7ff7185e232e94db8f9645835f253f7adf82c8513f6f3f14cac1922e478f"} Jan 05 
21:50:24 crc kubenswrapper[5000]: I0105 21:50:24.274259 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qpqcz"] Jan 05 21:50:24 crc kubenswrapper[5000]: I0105 21:50:24.275395 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qpqcz"] Jan 05 21:50:24 crc kubenswrapper[5000]: I0105 21:50:24.275692 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qpqcz" Jan 05 21:50:24 crc kubenswrapper[5000]: I0105 21:50:24.279051 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 05 21:50:24 crc kubenswrapper[5000]: I0105 21:50:24.365234 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkprh\" (UniqueName: \"kubernetes.io/projected/e2c5fc17-5d5a-42f5-94e4-a265b915bf6b-kube-api-access-jkprh\") pod \"root-account-create-update-qpqcz\" (UID: \"e2c5fc17-5d5a-42f5-94e4-a265b915bf6b\") " pod="openstack/root-account-create-update-qpqcz" Jan 05 21:50:24 crc kubenswrapper[5000]: I0105 21:50:24.365368 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c5fc17-5d5a-42f5-94e4-a265b915bf6b-operator-scripts\") pod \"root-account-create-update-qpqcz\" (UID: \"e2c5fc17-5d5a-42f5-94e4-a265b915bf6b\") " pod="openstack/root-account-create-update-qpqcz" Jan 05 21:50:24 crc kubenswrapper[5000]: I0105 21:50:24.467094 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkprh\" (UniqueName: \"kubernetes.io/projected/e2c5fc17-5d5a-42f5-94e4-a265b915bf6b-kube-api-access-jkprh\") pod \"root-account-create-update-qpqcz\" (UID: \"e2c5fc17-5d5a-42f5-94e4-a265b915bf6b\") " pod="openstack/root-account-create-update-qpqcz" Jan 05 21:50:24 crc kubenswrapper[5000]: I0105 
21:50:24.467171 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c5fc17-5d5a-42f5-94e4-a265b915bf6b-operator-scripts\") pod \"root-account-create-update-qpqcz\" (UID: \"e2c5fc17-5d5a-42f5-94e4-a265b915bf6b\") " pod="openstack/root-account-create-update-qpqcz" Jan 05 21:50:24 crc kubenswrapper[5000]: I0105 21:50:24.467836 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c5fc17-5d5a-42f5-94e4-a265b915bf6b-operator-scripts\") pod \"root-account-create-update-qpqcz\" (UID: \"e2c5fc17-5d5a-42f5-94e4-a265b915bf6b\") " pod="openstack/root-account-create-update-qpqcz" Jan 05 21:50:24 crc kubenswrapper[5000]: I0105 21:50:24.490749 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkprh\" (UniqueName: \"kubernetes.io/projected/e2c5fc17-5d5a-42f5-94e4-a265b915bf6b-kube-api-access-jkprh\") pod \"root-account-create-update-qpqcz\" (UID: \"e2c5fc17-5d5a-42f5-94e4-a265b915bf6b\") " pod="openstack/root-account-create-update-qpqcz" Jan 05 21:50:24 crc kubenswrapper[5000]: I0105 21:50:24.595089 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qpqcz" Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.026593 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qpqcz"] Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.243957 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="fcda7dd4d8fd644f00dbabb101ded861726f4a6f3ef2d7cca2281e23671cc2ef" exitCode=0 Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.244036 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"fcda7dd4d8fd644f00dbabb101ded861726f4a6f3ef2d7cca2281e23671cc2ef"} Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.244103 5000 scope.go:117] "RemoveContainer" containerID="cf4c8cd2c0e0c7d61f54579da2fd7b1a52efe0ef420b5d0f2c3068e03afe71bf" Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.245036 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qpqcz" event={"ID":"e2c5fc17-5d5a-42f5-94e4-a265b915bf6b","Type":"ContainerStarted","Data":"82a6683d2f9ebf9058dca58239e374fad92f58c05aa43a8beb1f9cc0957d51ae"} Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.425300 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8427j"] Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.427825 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.441282 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8427j"] Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.484928 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-catalog-content\") pod \"community-operators-8427j\" (UID: \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\") " pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.485064 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-utilities\") pod \"community-operators-8427j\" (UID: \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\") " pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.485131 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsfmp\" (UniqueName: \"kubernetes.io/projected/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-kube-api-access-bsfmp\") pod \"community-operators-8427j\" (UID: \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\") " pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.587024 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsfmp\" (UniqueName: \"kubernetes.io/projected/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-kube-api-access-bsfmp\") pod \"community-operators-8427j\" (UID: \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\") " pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.587214 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-catalog-content\") pod \"community-operators-8427j\" (UID: \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\") " pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.587288 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-utilities\") pod \"community-operators-8427j\" (UID: \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\") " pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.588098 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-catalog-content\") pod \"community-operators-8427j\" (UID: \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\") " pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.588582 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-utilities\") pod \"community-operators-8427j\" (UID: \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\") " pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.617198 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsfmp\" (UniqueName: \"kubernetes.io/projected/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-kube-api-access-bsfmp\") pod \"community-operators-8427j\" (UID: \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\") " pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:25 crc kubenswrapper[5000]: I0105 21:50:25.752645 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:26 crc kubenswrapper[5000]: I0105 21:50:26.255935 5000 generic.go:334] "Generic (PLEG): container finished" podID="e2c5fc17-5d5a-42f5-94e4-a265b915bf6b" containerID="13b527218d5c31ca5dcfe8d50ac62803b8c8139beabc6c2b5cdace5c3f14ddf4" exitCode=0 Jan 05 21:50:26 crc kubenswrapper[5000]: I0105 21:50:26.256141 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qpqcz" event={"ID":"e2c5fc17-5d5a-42f5-94e4-a265b915bf6b","Type":"ContainerDied","Data":"13b527218d5c31ca5dcfe8d50ac62803b8c8139beabc6c2b5cdace5c3f14ddf4"} Jan 05 21:50:26 crc kubenswrapper[5000]: I0105 21:50:26.259171 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qtwd6-config-5lkxb" event={"ID":"2ae428c6-4122-4086-825a-07254a8a8ed3","Type":"ContainerStarted","Data":"fe63e94b35090b3122a7827eb7ef966253d86322302c8379c303531b66412252"} Jan 05 21:50:26 crc kubenswrapper[5000]: I0105 21:50:26.294253 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8427j"] Jan 05 21:50:26 crc kubenswrapper[5000]: W0105 21:50:26.296296 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41ae58aa_f381_41a3_a1d3_04dec22b2ca7.slice/crio-1f70e52ba4158d711d98fc72fc41207b3246a018bf82e3bfbb5e875d83b49a72 WatchSource:0}: Error finding container 1f70e52ba4158d711d98fc72fc41207b3246a018bf82e3bfbb5e875d83b49a72: Status 404 returned error can't find the container with id 1f70e52ba4158d711d98fc72fc41207b3246a018bf82e3bfbb5e875d83b49a72 Jan 05 21:50:26 crc kubenswrapper[5000]: I0105 21:50:26.302848 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qtwd6-config-5lkxb" podStartSLOduration=5.302834881 podStartE2EDuration="5.302834881s" podCreationTimestamp="2026-01-05 21:50:21 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:26.29754056 +0000 UTC m=+981.253743029" watchObservedRunningTime="2026-01-05 21:50:26.302834881 +0000 UTC m=+981.259037350" Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.111818 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0" Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.121273 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f-etc-swift\") pod \"swift-storage-0\" (UID: \"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f\") " pod="openstack/swift-storage-0" Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.268071 5000 generic.go:334] "Generic (PLEG): container finished" podID="2ae428c6-4122-4086-825a-07254a8a8ed3" containerID="fe63e94b35090b3122a7827eb7ef966253d86322302c8379c303531b66412252" exitCode=0 Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.268739 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qtwd6-config-5lkxb" event={"ID":"2ae428c6-4122-4086-825a-07254a8a8ed3","Type":"ContainerDied","Data":"fe63e94b35090b3122a7827eb7ef966253d86322302c8379c303531b66412252"} Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.271455 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"2afb4d5d8baa55f032a268f19c9c0e64f3bcb79bfc34f77baf7addae2164ef7a"} Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.273844 5000 generic.go:334] "Generic (PLEG): container finished" 
podID="41ae58aa-f381-41a3-a1d3-04dec22b2ca7" containerID="df5591c32be8cfbebf5f04a8281f2d2b3995308349b30d071a5e88c1b77ca279" exitCode=0 Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.273943 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8427j" event={"ID":"41ae58aa-f381-41a3-a1d3-04dec22b2ca7","Type":"ContainerDied","Data":"df5591c32be8cfbebf5f04a8281f2d2b3995308349b30d071a5e88c1b77ca279"} Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.274144 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8427j" event={"ID":"41ae58aa-f381-41a3-a1d3-04dec22b2ca7","Type":"ContainerStarted","Data":"1f70e52ba4158d711d98fc72fc41207b3246a018bf82e3bfbb5e875d83b49a72"} Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.276230 5000 generic.go:334] "Generic (PLEG): container finished" podID="c136dd3d-0202-41d3-bdd8-6cc50947b925" containerID="003e0e78a54379ff8a1c07a4f2d3e6218da62748f807f43c995ff24fad2a62ea" exitCode=0 Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.276393 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg5r7" event={"ID":"c136dd3d-0202-41d3-bdd8-6cc50947b925","Type":"ContainerDied","Data":"003e0e78a54379ff8a1c07a4f2d3e6218da62748f807f43c995ff24fad2a62ea"} Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.311899 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.717616 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qpqcz" Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.727659 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c5fc17-5d5a-42f5-94e4-a265b915bf6b-operator-scripts\") pod \"e2c5fc17-5d5a-42f5-94e4-a265b915bf6b\" (UID: \"e2c5fc17-5d5a-42f5-94e4-a265b915bf6b\") " Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.727726 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkprh\" (UniqueName: \"kubernetes.io/projected/e2c5fc17-5d5a-42f5-94e4-a265b915bf6b-kube-api-access-jkprh\") pod \"e2c5fc17-5d5a-42f5-94e4-a265b915bf6b\" (UID: \"e2c5fc17-5d5a-42f5-94e4-a265b915bf6b\") " Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.729290 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c5fc17-5d5a-42f5-94e4-a265b915bf6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2c5fc17-5d5a-42f5-94e4-a265b915bf6b" (UID: "e2c5fc17-5d5a-42f5-94e4-a265b915bf6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.734393 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c5fc17-5d5a-42f5-94e4-a265b915bf6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.753573 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c5fc17-5d5a-42f5-94e4-a265b915bf6b-kube-api-access-jkprh" (OuterVolumeSpecName: "kube-api-access-jkprh") pod "e2c5fc17-5d5a-42f5-94e4-a265b915bf6b" (UID: "e2c5fc17-5d5a-42f5-94e4-a265b915bf6b"). InnerVolumeSpecName "kube-api-access-jkprh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.835436 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkprh\" (UniqueName: \"kubernetes.io/projected/e2c5fc17-5d5a-42f5-94e4-a265b915bf6b-kube-api-access-jkprh\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:27 crc kubenswrapper[5000]: I0105 21:50:27.940282 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.287139 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8427j" event={"ID":"41ae58aa-f381-41a3-a1d3-04dec22b2ca7","Type":"ContainerStarted","Data":"6fbf63f7a69f12c3bbd706d2b6603fc029c2561731b5d0f6af53914a2beb5679"} Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.290487 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg5r7" event={"ID":"c136dd3d-0202-41d3-bdd8-6cc50947b925","Type":"ContainerStarted","Data":"76a30409e6e29ba65e1a6075eaa837d4dbdf801272bd0c2905bc68a35c520b5f"} Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.292472 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qpqcz" event={"ID":"e2c5fc17-5d5a-42f5-94e4-a265b915bf6b","Type":"ContainerDied","Data":"82a6683d2f9ebf9058dca58239e374fad92f58c05aa43a8beb1f9cc0957d51ae"} Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.292497 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82a6683d2f9ebf9058dca58239e374fad92f58c05aa43a8beb1f9cc0957d51ae" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.292497 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qpqcz" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.293537 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"b008ce11e3270e131eba1bd20015bac4333388a6838d60c4d3e732e5b4be543b"} Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.343112 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kg5r7" podStartSLOduration=1.849256615 podStartE2EDuration="7.343095623s" podCreationTimestamp="2026-01-05 21:50:21 +0000 UTC" firstStartedPulling="2026-01-05 21:50:22.222261647 +0000 UTC m=+977.178464116" lastFinishedPulling="2026-01-05 21:50:27.716100655 +0000 UTC m=+982.672303124" observedRunningTime="2026-01-05 21:50:28.338340587 +0000 UTC m=+983.294543066" watchObservedRunningTime="2026-01-05 21:50:28.343095623 +0000 UTC m=+983.299298102" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.672390 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.749097 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ae428c6-4122-4086-825a-07254a8a8ed3-scripts\") pod \"2ae428c6-4122-4086-825a-07254a8a8ed3\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.749193 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-log-ovn\") pod \"2ae428c6-4122-4086-825a-07254a8a8ed3\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.749280 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-run-ovn\") pod \"2ae428c6-4122-4086-825a-07254a8a8ed3\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.749298 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-run\") pod \"2ae428c6-4122-4086-825a-07254a8a8ed3\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.749372 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae428c6-4122-4086-825a-07254a8a8ed3-additional-scripts\") pod \"2ae428c6-4122-4086-825a-07254a8a8ed3\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.749361 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2ae428c6-4122-4086-825a-07254a8a8ed3" (UID: "2ae428c6-4122-4086-825a-07254a8a8ed3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.749453 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2ae428c6-4122-4086-825a-07254a8a8ed3" (UID: "2ae428c6-4122-4086-825a-07254a8a8ed3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.749456 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzg4c\" (UniqueName: \"kubernetes.io/projected/2ae428c6-4122-4086-825a-07254a8a8ed3-kube-api-access-fzg4c\") pod \"2ae428c6-4122-4086-825a-07254a8a8ed3\" (UID: \"2ae428c6-4122-4086-825a-07254a8a8ed3\") " Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.749745 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-run" (OuterVolumeSpecName: "var-run") pod "2ae428c6-4122-4086-825a-07254a8a8ed3" (UID: "2ae428c6-4122-4086-825a-07254a8a8ed3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.750278 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae428c6-4122-4086-825a-07254a8a8ed3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2ae428c6-4122-4086-825a-07254a8a8ed3" (UID: "2ae428c6-4122-4086-825a-07254a8a8ed3"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.750376 5000 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.750397 5000 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-run\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.750405 5000 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ae428c6-4122-4086-825a-07254a8a8ed3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.750418 5000 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2ae428c6-4122-4086-825a-07254a8a8ed3-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.750635 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae428c6-4122-4086-825a-07254a8a8ed3-scripts" (OuterVolumeSpecName: "scripts") pod "2ae428c6-4122-4086-825a-07254a8a8ed3" (UID: "2ae428c6-4122-4086-825a-07254a8a8ed3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.771190 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae428c6-4122-4086-825a-07254a8a8ed3-kube-api-access-fzg4c" (OuterVolumeSpecName: "kube-api-access-fzg4c") pod "2ae428c6-4122-4086-825a-07254a8a8ed3" (UID: "2ae428c6-4122-4086-825a-07254a8a8ed3"). InnerVolumeSpecName "kube-api-access-fzg4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.845072 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.852287 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzg4c\" (UniqueName: \"kubernetes.io/projected/2ae428c6-4122-4086-825a-07254a8a8ed3-kube-api-access-fzg4c\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:28 crc kubenswrapper[5000]: I0105 21:50:28.852323 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ae428c6-4122-4086-825a-07254a8a8ed3-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.166521 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-l4nvl"] Jan 05 21:50:29 crc kubenswrapper[5000]: E0105 21:50:29.166833 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae428c6-4122-4086-825a-07254a8a8ed3" containerName="ovn-config" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.166847 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae428c6-4122-4086-825a-07254a8a8ed3" containerName="ovn-config" Jan 05 21:50:29 crc kubenswrapper[5000]: E0105 21:50:29.166857 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c5fc17-5d5a-42f5-94e4-a265b915bf6b" containerName="mariadb-account-create-update" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.166863 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c5fc17-5d5a-42f5-94e4-a265b915bf6b" containerName="mariadb-account-create-update" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.167069 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae428c6-4122-4086-825a-07254a8a8ed3" containerName="ovn-config" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.167083 5000 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e2c5fc17-5d5a-42f5-94e4-a265b915bf6b" containerName="mariadb-account-create-update" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.167596 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-l4nvl" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.184486 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-l4nvl"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.187067 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.260373 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qq7b\" (UniqueName: \"kubernetes.io/projected/1fd9b04a-feba-4af2-a02f-be6af11c059c-kube-api-access-7qq7b\") pod \"cinder-db-create-l4nvl\" (UID: \"1fd9b04a-feba-4af2-a02f-be6af11c059c\") " pod="openstack/cinder-db-create-l4nvl" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.260451 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd9b04a-feba-4af2-a02f-be6af11c059c-operator-scripts\") pod \"cinder-db-create-l4nvl\" (UID: \"1fd9b04a-feba-4af2-a02f-be6af11c059c\") " pod="openstack/cinder-db-create-l4nvl" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.300124 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-s2pv5"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.302682 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-s2pv5" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.304646 5000 generic.go:334] "Generic (PLEG): container finished" podID="41ae58aa-f381-41a3-a1d3-04dec22b2ca7" containerID="6fbf63f7a69f12c3bbd706d2b6603fc029c2561731b5d0f6af53914a2beb5679" exitCode=0 Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.304707 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8427j" event={"ID":"41ae58aa-f381-41a3-a1d3-04dec22b2ca7","Type":"ContainerDied","Data":"6fbf63f7a69f12c3bbd706d2b6603fc029c2561731b5d0f6af53914a2beb5679"} Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.306353 5000 generic.go:334] "Generic (PLEG): container finished" podID="3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f" containerID="80cac4240af458a184a16d566eb066c70df0ce539a9dee0bedb5c89f8f36b75c" exitCode=0 Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.306387 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zg424" event={"ID":"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f","Type":"ContainerDied","Data":"80cac4240af458a184a16d566eb066c70df0ce539a9dee0bedb5c89f8f36b75c"} Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.315019 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qtwd6-config-5lkxb" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.320532 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qtwd6-config-5lkxb" event={"ID":"2ae428c6-4122-4086-825a-07254a8a8ed3","Type":"ContainerDied","Data":"cb3c7ff7185e232e94db8f9645835f253f7adf82c8513f6f3f14cac1922e478f"} Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.320593 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb3c7ff7185e232e94db8f9645835f253f7adf82c8513f6f3f14cac1922e478f" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.320617 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-s2pv5"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.362343 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bhrb\" (UniqueName: \"kubernetes.io/projected/56ff8f19-5fd1-41f3-b417-1d32146bad28-kube-api-access-4bhrb\") pod \"barbican-db-create-s2pv5\" (UID: \"56ff8f19-5fd1-41f3-b417-1d32146bad28\") " pod="openstack/barbican-db-create-s2pv5" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.362404 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qq7b\" (UniqueName: \"kubernetes.io/projected/1fd9b04a-feba-4af2-a02f-be6af11c059c-kube-api-access-7qq7b\") pod \"cinder-db-create-l4nvl\" (UID: \"1fd9b04a-feba-4af2-a02f-be6af11c059c\") " pod="openstack/cinder-db-create-l4nvl" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.362474 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56ff8f19-5fd1-41f3-b417-1d32146bad28-operator-scripts\") pod \"barbican-db-create-s2pv5\" (UID: \"56ff8f19-5fd1-41f3-b417-1d32146bad28\") " pod="openstack/barbican-db-create-s2pv5" Jan 05 21:50:29 crc 
kubenswrapper[5000]: I0105 21:50:29.362496 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd9b04a-feba-4af2-a02f-be6af11c059c-operator-scripts\") pod \"cinder-db-create-l4nvl\" (UID: \"1fd9b04a-feba-4af2-a02f-be6af11c059c\") " pod="openstack/cinder-db-create-l4nvl" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.366580 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd9b04a-feba-4af2-a02f-be6af11c059c-operator-scripts\") pod \"cinder-db-create-l4nvl\" (UID: \"1fd9b04a-feba-4af2-a02f-be6af11c059c\") " pod="openstack/cinder-db-create-l4nvl" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.392649 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qq7b\" (UniqueName: \"kubernetes.io/projected/1fd9b04a-feba-4af2-a02f-be6af11c059c-kube-api-access-7qq7b\") pod \"cinder-db-create-l4nvl\" (UID: \"1fd9b04a-feba-4af2-a02f-be6af11c059c\") " pod="openstack/cinder-db-create-l4nvl" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.402820 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fa44-account-create-update-l84sv"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.403824 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fa44-account-create-update-l84sv" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.405661 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.408979 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fa44-account-create-update-l84sv"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.463105 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac82245a-da6c-4a0a-98a2-404935fbfb64-operator-scripts\") pod \"barbican-fa44-account-create-update-l84sv\" (UID: \"ac82245a-da6c-4a0a-98a2-404935fbfb64\") " pod="openstack/barbican-fa44-account-create-update-l84sv" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.463977 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bhrb\" (UniqueName: \"kubernetes.io/projected/56ff8f19-5fd1-41f3-b417-1d32146bad28-kube-api-access-4bhrb\") pod \"barbican-db-create-s2pv5\" (UID: \"56ff8f19-5fd1-41f3-b417-1d32146bad28\") " pod="openstack/barbican-db-create-s2pv5" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.464154 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsh9n\" (UniqueName: \"kubernetes.io/projected/ac82245a-da6c-4a0a-98a2-404935fbfb64-kube-api-access-bsh9n\") pod \"barbican-fa44-account-create-update-l84sv\" (UID: \"ac82245a-da6c-4a0a-98a2-404935fbfb64\") " pod="openstack/barbican-fa44-account-create-update-l84sv" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.464302 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56ff8f19-5fd1-41f3-b417-1d32146bad28-operator-scripts\") pod \"barbican-db-create-s2pv5\" 
(UID: \"56ff8f19-5fd1-41f3-b417-1d32146bad28\") " pod="openstack/barbican-db-create-s2pv5" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.465359 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56ff8f19-5fd1-41f3-b417-1d32146bad28-operator-scripts\") pod \"barbican-db-create-s2pv5\" (UID: \"56ff8f19-5fd1-41f3-b417-1d32146bad28\") " pod="openstack/barbican-db-create-s2pv5" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.468598 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6g5ww"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.469835 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6g5ww" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.475388 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.475803 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.476152 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.483225 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zcmgb" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.485127 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6g5ww"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.496017 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-dba2-account-create-update-pg6tz"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.497137 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-dba2-account-create-update-pg6tz" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.502002 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.503710 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dba2-account-create-update-pg6tz"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.504665 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-l4nvl" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.508626 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bhrb\" (UniqueName: \"kubernetes.io/projected/56ff8f19-5fd1-41f3-b417-1d32146bad28-kube-api-access-4bhrb\") pod \"barbican-db-create-s2pv5\" (UID: \"56ff8f19-5fd1-41f3-b417-1d32146bad28\") " pod="openstack/barbican-db-create-s2pv5" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.565849 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-498mv\" (UniqueName: \"kubernetes.io/projected/8e46dcd5-83ef-4a7b-a07b-a850071a330c-kube-api-access-498mv\") pod \"keystone-db-sync-6g5ww\" (UID: \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\") " pod="openstack/keystone-db-sync-6g5ww" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.565923 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac82245a-da6c-4a0a-98a2-404935fbfb64-operator-scripts\") pod \"barbican-fa44-account-create-update-l84sv\" (UID: \"ac82245a-da6c-4a0a-98a2-404935fbfb64\") " pod="openstack/barbican-fa44-account-create-update-l84sv" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.565950 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsh9n\" (UniqueName: 
\"kubernetes.io/projected/ac82245a-da6c-4a0a-98a2-404935fbfb64-kube-api-access-bsh9n\") pod \"barbican-fa44-account-create-update-l84sv\" (UID: \"ac82245a-da6c-4a0a-98a2-404935fbfb64\") " pod="openstack/barbican-fa44-account-create-update-l84sv" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.565997 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2gh4\" (UniqueName: \"kubernetes.io/projected/b03a78cf-7207-491b-bdf2-dc30e3f70480-kube-api-access-p2gh4\") pod \"cinder-dba2-account-create-update-pg6tz\" (UID: \"b03a78cf-7207-491b-bdf2-dc30e3f70480\") " pod="openstack/cinder-dba2-account-create-update-pg6tz" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.566019 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e46dcd5-83ef-4a7b-a07b-a850071a330c-combined-ca-bundle\") pod \"keystone-db-sync-6g5ww\" (UID: \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\") " pod="openstack/keystone-db-sync-6g5ww" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.566038 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e46dcd5-83ef-4a7b-a07b-a850071a330c-config-data\") pod \"keystone-db-sync-6g5ww\" (UID: \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\") " pod="openstack/keystone-db-sync-6g5ww" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.566059 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b03a78cf-7207-491b-bdf2-dc30e3f70480-operator-scripts\") pod \"cinder-dba2-account-create-update-pg6tz\" (UID: \"b03a78cf-7207-491b-bdf2-dc30e3f70480\") " pod="openstack/cinder-dba2-account-create-update-pg6tz" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.566642 5000 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac82245a-da6c-4a0a-98a2-404935fbfb64-operator-scripts\") pod \"barbican-fa44-account-create-update-l84sv\" (UID: \"ac82245a-da6c-4a0a-98a2-404935fbfb64\") " pod="openstack/barbican-fa44-account-create-update-l84sv" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.573853 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-kq56w"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.574798 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kq56w" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.587958 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsh9n\" (UniqueName: \"kubernetes.io/projected/ac82245a-da6c-4a0a-98a2-404935fbfb64-kube-api-access-bsh9n\") pod \"barbican-fa44-account-create-update-l84sv\" (UID: \"ac82245a-da6c-4a0a-98a2-404935fbfb64\") " pod="openstack/barbican-fa44-account-create-update-l84sv" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.588851 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kq56w"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.631193 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-s2pv5" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.671610 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2gh4\" (UniqueName: \"kubernetes.io/projected/b03a78cf-7207-491b-bdf2-dc30e3f70480-kube-api-access-p2gh4\") pod \"cinder-dba2-account-create-update-pg6tz\" (UID: \"b03a78cf-7207-491b-bdf2-dc30e3f70480\") " pod="openstack/cinder-dba2-account-create-update-pg6tz" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.671653 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e46dcd5-83ef-4a7b-a07b-a850071a330c-combined-ca-bundle\") pod \"keystone-db-sync-6g5ww\" (UID: \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\") " pod="openstack/keystone-db-sync-6g5ww" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.671679 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e46dcd5-83ef-4a7b-a07b-a850071a330c-config-data\") pod \"keystone-db-sync-6g5ww\" (UID: \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\") " pod="openstack/keystone-db-sync-6g5ww" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.671703 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b03a78cf-7207-491b-bdf2-dc30e3f70480-operator-scripts\") pod \"cinder-dba2-account-create-update-pg6tz\" (UID: \"b03a78cf-7207-491b-bdf2-dc30e3f70480\") " pod="openstack/cinder-dba2-account-create-update-pg6tz" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.671784 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37652792-2853-4edf-a1e4-c0f51291b3c4-operator-scripts\") pod \"neutron-db-create-kq56w\" (UID: 
\"37652792-2853-4edf-a1e4-c0f51291b3c4\") " pod="openstack/neutron-db-create-kq56w" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.671822 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-498mv\" (UniqueName: \"kubernetes.io/projected/8e46dcd5-83ef-4a7b-a07b-a850071a330c-kube-api-access-498mv\") pod \"keystone-db-sync-6g5ww\" (UID: \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\") " pod="openstack/keystone-db-sync-6g5ww" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.671872 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9mfm\" (UniqueName: \"kubernetes.io/projected/37652792-2853-4edf-a1e4-c0f51291b3c4-kube-api-access-c9mfm\") pod \"neutron-db-create-kq56w\" (UID: \"37652792-2853-4edf-a1e4-c0f51291b3c4\") " pod="openstack/neutron-db-create-kq56w" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.672672 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b03a78cf-7207-491b-bdf2-dc30e3f70480-operator-scripts\") pod \"cinder-dba2-account-create-update-pg6tz\" (UID: \"b03a78cf-7207-491b-bdf2-dc30e3f70480\") " pod="openstack/cinder-dba2-account-create-update-pg6tz" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.676316 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e46dcd5-83ef-4a7b-a07b-a850071a330c-combined-ca-bundle\") pod \"keystone-db-sync-6g5ww\" (UID: \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\") " pod="openstack/keystone-db-sync-6g5ww" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.676391 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e46dcd5-83ef-4a7b-a07b-a850071a330c-config-data\") pod \"keystone-db-sync-6g5ww\" (UID: \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\") " 
pod="openstack/keystone-db-sync-6g5ww" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.714816 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2gh4\" (UniqueName: \"kubernetes.io/projected/b03a78cf-7207-491b-bdf2-dc30e3f70480-kube-api-access-p2gh4\") pod \"cinder-dba2-account-create-update-pg6tz\" (UID: \"b03a78cf-7207-491b-bdf2-dc30e3f70480\") " pod="openstack/cinder-dba2-account-create-update-pg6tz" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.722341 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-498mv\" (UniqueName: \"kubernetes.io/projected/8e46dcd5-83ef-4a7b-a07b-a850071a330c-kube-api-access-498mv\") pod \"keystone-db-sync-6g5ww\" (UID: \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\") " pod="openstack/keystone-db-sync-6g5ww" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.732184 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5515-account-create-update-7wkj4"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.740810 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5515-account-create-update-7wkj4"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.742715 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5515-account-create-update-7wkj4" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.747553 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.747669 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fa44-account-create-update-l84sv" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.778134 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37652792-2853-4edf-a1e4-c0f51291b3c4-operator-scripts\") pod \"neutron-db-create-kq56w\" (UID: \"37652792-2853-4edf-a1e4-c0f51291b3c4\") " pod="openstack/neutron-db-create-kq56w" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.778202 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fa6ddda-7b19-4d81-b114-b887e43ce7e2-operator-scripts\") pod \"neutron-5515-account-create-update-7wkj4\" (UID: \"6fa6ddda-7b19-4d81-b114-b887e43ce7e2\") " pod="openstack/neutron-5515-account-create-update-7wkj4" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.778341 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w72xp\" (UniqueName: \"kubernetes.io/projected/6fa6ddda-7b19-4d81-b114-b887e43ce7e2-kube-api-access-w72xp\") pod \"neutron-5515-account-create-update-7wkj4\" (UID: \"6fa6ddda-7b19-4d81-b114-b887e43ce7e2\") " pod="openstack/neutron-5515-account-create-update-7wkj4" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.778383 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9mfm\" (UniqueName: \"kubernetes.io/projected/37652792-2853-4edf-a1e4-c0f51291b3c4-kube-api-access-c9mfm\") pod \"neutron-db-create-kq56w\" (UID: \"37652792-2853-4edf-a1e4-c0f51291b3c4\") " pod="openstack/neutron-db-create-kq56w" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.781580 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37652792-2853-4edf-a1e4-c0f51291b3c4-operator-scripts\") pod 
\"neutron-db-create-kq56w\" (UID: \"37652792-2853-4edf-a1e4-c0f51291b3c4\") " pod="openstack/neutron-db-create-kq56w" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.802841 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6g5ww" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.803970 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9mfm\" (UniqueName: \"kubernetes.io/projected/37652792-2853-4edf-a1e4-c0f51291b3c4-kube-api-access-c9mfm\") pod \"neutron-db-create-kq56w\" (UID: \"37652792-2853-4edf-a1e4-c0f51291b3c4\") " pod="openstack/neutron-db-create-kq56w" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.811983 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qtwd6-config-5lkxb"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.834526 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qtwd6-config-5lkxb"] Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.849696 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-dba2-account-create-update-pg6tz" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.880052 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fa6ddda-7b19-4d81-b114-b887e43ce7e2-operator-scripts\") pod \"neutron-5515-account-create-update-7wkj4\" (UID: \"6fa6ddda-7b19-4d81-b114-b887e43ce7e2\") " pod="openstack/neutron-5515-account-create-update-7wkj4" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.880127 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w72xp\" (UniqueName: \"kubernetes.io/projected/6fa6ddda-7b19-4d81-b114-b887e43ce7e2-kube-api-access-w72xp\") pod \"neutron-5515-account-create-update-7wkj4\" (UID: \"6fa6ddda-7b19-4d81-b114-b887e43ce7e2\") " pod="openstack/neutron-5515-account-create-update-7wkj4" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.882042 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fa6ddda-7b19-4d81-b114-b887e43ce7e2-operator-scripts\") pod \"neutron-5515-account-create-update-7wkj4\" (UID: \"6fa6ddda-7b19-4d81-b114-b887e43ce7e2\") " pod="openstack/neutron-5515-account-create-update-7wkj4" Jan 05 21:50:29 crc kubenswrapper[5000]: I0105 21:50:29.901617 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w72xp\" (UniqueName: \"kubernetes.io/projected/6fa6ddda-7b19-4d81-b114-b887e43ce7e2-kube-api-access-w72xp\") pod \"neutron-5515-account-create-update-7wkj4\" (UID: \"6fa6ddda-7b19-4d81-b114-b887e43ce7e2\") " pod="openstack/neutron-5515-account-create-update-7wkj4" Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.035417 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-kq56w" Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.084918 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5515-account-create-update-7wkj4" Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.131641 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-l4nvl"] Jan 05 21:50:30 crc kubenswrapper[5000]: W0105 21:50:30.150013 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fd9b04a_feba_4af2_a02f_be6af11c059c.slice/crio-7d9b5e7363593439b3678515aefa74061dead6ef8a50ca3daeed0bb452fa0b46 WatchSource:0}: Error finding container 7d9b5e7363593439b3678515aefa74061dead6ef8a50ca3daeed0bb452fa0b46: Status 404 returned error can't find the container with id 7d9b5e7363593439b3678515aefa74061dead6ef8a50ca3daeed0bb452fa0b46 Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.298045 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-s2pv5"] Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.341870 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-l4nvl" event={"ID":"1fd9b04a-feba-4af2-a02f-be6af11c059c","Type":"ContainerStarted","Data":"7d9b5e7363593439b3678515aefa74061dead6ef8a50ca3daeed0bb452fa0b46"} Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.363780 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"b6af4becd37fb69a45ac1cfed2aa1b9e7aa5c2f673e9eb329496645741ce2bc1"} Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.363824 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"272eec3371d024f32b46ffc2e8638ee8f5e01a43ccd9c1e2fda66eb44bf3bde9"} Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.431060 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fa44-account-create-update-l84sv"] Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.442729 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6g5ww"] Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.501367 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qpqcz"] Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.516919 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qpqcz"] Jan 05 21:50:30 crc kubenswrapper[5000]: W0105 21:50:30.532063 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb03a78cf_7207_491b_bdf2_dc30e3f70480.slice/crio-ef4e196ffb5a36eba642fef4952ce7705e929b1f99e6a8ce0664390313a20472 WatchSource:0}: Error finding container ef4e196ffb5a36eba642fef4952ce7705e929b1f99e6a8ce0664390313a20472: Status 404 returned error can't find the container with id ef4e196ffb5a36eba642fef4952ce7705e929b1f99e6a8ce0664390313a20472 Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.532735 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dba2-account-create-update-pg6tz"] Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.696764 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kq56w"] Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.835142 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5515-account-create-update-7wkj4"] Jan 05 21:50:30 crc kubenswrapper[5000]: I0105 21:50:30.962841 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zg424" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.019398 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-combined-ca-bundle\") pod \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.019458 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-db-sync-config-data\") pod \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.019573 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-config-data\") pod \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.037849 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f" (UID: "3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.114972 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f" (UID: "3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.124347 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9x59\" (UniqueName: \"kubernetes.io/projected/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-kube-api-access-q9x59\") pod \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\" (UID: \"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f\") " Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.124671 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.124688 5000 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.131316 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-kube-api-access-q9x59" (OuterVolumeSpecName: "kube-api-access-q9x59") pod "3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f" (UID: "3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f"). InnerVolumeSpecName "kube-api-access-q9x59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.170180 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-config-data" (OuterVolumeSpecName: "config-data") pod "3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f" (UID: "3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.225806 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.225834 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9x59\" (UniqueName: \"kubernetes.io/projected/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f-kube-api-access-q9x59\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.334358 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ae428c6-4122-4086-825a-07254a8a8ed3" path="/var/lib/kubelet/pods/2ae428c6-4122-4086-825a-07254a8a8ed3/volumes" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.335004 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2c5fc17-5d5a-42f5-94e4-a265b915bf6b" path="/var/lib/kubelet/pods/e2c5fc17-5d5a-42f5-94e4-a265b915bf6b/volumes" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.386600 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kq56w" event={"ID":"37652792-2853-4edf-a1e4-c0f51291b3c4","Type":"ContainerStarted","Data":"c0b86e428148a8829ec674d8d1c1348f9988c252f570b4589d9023df9df696ab"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.386639 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kq56w" event={"ID":"37652792-2853-4edf-a1e4-c0f51291b3c4","Type":"ContainerStarted","Data":"5f229c013f27bc8f738bb7e72d5f0162920896fc5d6641786caf8d200c82dc9f"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.402595 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6g5ww" 
event={"ID":"8e46dcd5-83ef-4a7b-a07b-a850071a330c","Type":"ContainerStarted","Data":"a7354c3240c98faa5c10d715ad54e89d7ad1199519a236e0e93cebb5077f1672"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.411149 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.411546 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.418496 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zg424" event={"ID":"3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f","Type":"ContainerDied","Data":"059d7d568386eb4a2ff00d46968bd5639c52329dd272c188cf3b904249d9084c"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.418531 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="059d7d568386eb4a2ff00d46968bd5639c52329dd272c188cf3b904249d9084c" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.418586 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zg424" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.421720 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-kq56w" podStartSLOduration=2.421704234 podStartE2EDuration="2.421704234s" podCreationTimestamp="2026-01-05 21:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:31.412136951 +0000 UTC m=+986.368339420" watchObservedRunningTime="2026-01-05 21:50:31.421704234 +0000 UTC m=+986.377906703" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.429252 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-s2pv5" event={"ID":"56ff8f19-5fd1-41f3-b417-1d32146bad28","Type":"ContainerStarted","Data":"17146eaf4414459be821baa03ee865f4422c3c9fd02929bf18a8fd7cf6b5e1b3"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.429300 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-s2pv5" event={"ID":"56ff8f19-5fd1-41f3-b417-1d32146bad28","Type":"ContainerStarted","Data":"d07cccba9b1b01e9620715c2d06bd890c955b94a23f5e356f154e5ca6170e55a"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.453754 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dba2-account-create-update-pg6tz" event={"ID":"b03a78cf-7207-491b-bdf2-dc30e3f70480","Type":"ContainerStarted","Data":"25265583ff414d808800f39fda3565ffaa38570825b4c8f313cb7c2cbdb3a374"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.453803 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dba2-account-create-update-pg6tz" event={"ID":"b03a78cf-7207-491b-bdf2-dc30e3f70480","Type":"ContainerStarted","Data":"ef4e196ffb5a36eba642fef4952ce7705e929b1f99e6a8ce0664390313a20472"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.459120 5000 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-s2pv5" podStartSLOduration=2.459100969 podStartE2EDuration="2.459100969s" podCreationTimestamp="2026-01-05 21:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:31.452207233 +0000 UTC m=+986.408409702" watchObservedRunningTime="2026-01-05 21:50:31.459100969 +0000 UTC m=+986.415303438" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.487570 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8427j" event={"ID":"41ae58aa-f381-41a3-a1d3-04dec22b2ca7","Type":"ContainerStarted","Data":"13d7266a89d384890b7542fe2dfe9a69631446ba60c49be4c8488734f7c2bf46"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.487820 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-dba2-account-create-update-pg6tz" podStartSLOduration=2.487802547 podStartE2EDuration="2.487802547s" podCreationTimestamp="2026-01-05 21:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:31.481979111 +0000 UTC m=+986.438181580" watchObservedRunningTime="2026-01-05 21:50:31.487802547 +0000 UTC m=+986.444005016" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.514336 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"2a3feee68089ad6fc7f4fd33192050ab9cafe1e26c0c3259012cb2635f3f756d"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.514375 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"99ecb10859d303877cfc8c93bea97353692bfd0a0d3c19dd28156dadb6992c85"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.519938 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fa44-account-create-update-l84sv" event={"ID":"ac82245a-da6c-4a0a-98a2-404935fbfb64","Type":"ContainerStarted","Data":"bfbce37f38c34c070eab3490287310fb893446752b76ae0f8ae5033d19bb4284"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.519998 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fa44-account-create-update-l84sv" event={"ID":"ac82245a-da6c-4a0a-98a2-404935fbfb64","Type":"ContainerStarted","Data":"055dbd233bd06369ee824cbb4f13ce583c2a0a2ea19272a12bc5b8673493928e"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.526461 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8427j" podStartSLOduration=3.082242169 podStartE2EDuration="6.526441398s" podCreationTimestamp="2026-01-05 21:50:25 +0000 UTC" firstStartedPulling="2026-01-05 21:50:27.276333143 +0000 UTC m=+982.232535612" lastFinishedPulling="2026-01-05 21:50:30.720532372 +0000 UTC m=+985.676734841" observedRunningTime="2026-01-05 21:50:31.518734089 +0000 UTC m=+986.474936558" watchObservedRunningTime="2026-01-05 21:50:31.526441398 +0000 UTC m=+986.482643857" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.540349 5000 generic.go:334] "Generic (PLEG): container finished" podID="1fd9b04a-feba-4af2-a02f-be6af11c059c" containerID="11891e0e0afb91d8d6fec56e174ac0ea5bd0799295dae78e249ba29b9afb016b" exitCode=0 Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.540424 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-l4nvl" 
event={"ID":"1fd9b04a-feba-4af2-a02f-be6af11c059c","Type":"ContainerDied","Data":"11891e0e0afb91d8d6fec56e174ac0ea5bd0799295dae78e249ba29b9afb016b"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.548827 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5515-account-create-update-7wkj4" event={"ID":"6fa6ddda-7b19-4d81-b114-b887e43ce7e2","Type":"ContainerStarted","Data":"868be418d5303816019d5ae684f9bcb8a9e2b0fa98e8d4d8a39046a000e97481"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.548882 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5515-account-create-update-7wkj4" event={"ID":"6fa6ddda-7b19-4d81-b114-b887e43ce7e2","Type":"ContainerStarted","Data":"90e8fce0a394289fbf02915d20ce8b56d35ab93bf98773e4f8b5f7e83cbd4896"} Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.557782 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-fa44-account-create-update-l84sv" podStartSLOduration=2.557761561 podStartE2EDuration="2.557761561s" podCreationTimestamp="2026-01-05 21:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:31.546369106 +0000 UTC m=+986.502571575" watchObservedRunningTime="2026-01-05 21:50:31.557761561 +0000 UTC m=+986.513964030" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.560231 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.629430 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5515-account-create-update-7wkj4" podStartSLOduration=2.629415243 podStartE2EDuration="2.629415243s" podCreationTimestamp="2026-01-05 21:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-05 21:50:31.627335184 +0000 UTC m=+986.583537723" watchObservedRunningTime="2026-01-05 21:50:31.629415243 +0000 UTC m=+986.585617712" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.872852 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pvhrq"] Jan 05 21:50:31 crc kubenswrapper[5000]: E0105 21:50:31.873238 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f" containerName="glance-db-sync" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.873253 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f" containerName="glance-db-sync" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.873400 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f" containerName="glance-db-sync" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.874194 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.900523 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pvhrq"] Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.940178 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.940241 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-config\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.940267 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.940293 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:31 crc kubenswrapper[5000]: I0105 21:50:31.940327 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kkn8n\" (UniqueName: \"kubernetes.io/projected/a889aad2-1507-4494-ad0b-16e298f1cd4d-kube-api-access-kkn8n\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.041419 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkn8n\" (UniqueName: \"kubernetes.io/projected/a889aad2-1507-4494-ad0b-16e298f1cd4d-kube-api-access-kkn8n\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.041503 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.041542 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-config\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.041575 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.041598 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.042393 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.043150 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.043812 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-config\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.044291 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.089777 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkn8n\" (UniqueName: \"kubernetes.io/projected/a889aad2-1507-4494-ad0b-16e298f1cd4d-kube-api-access-kkn8n\") pod 
\"dnsmasq-dns-5b946c75cc-pvhrq\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.311228 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.561532 5000 generic.go:334] "Generic (PLEG): container finished" podID="b03a78cf-7207-491b-bdf2-dc30e3f70480" containerID="25265583ff414d808800f39fda3565ffaa38570825b4c8f313cb7c2cbdb3a374" exitCode=0 Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.561598 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dba2-account-create-update-pg6tz" event={"ID":"b03a78cf-7207-491b-bdf2-dc30e3f70480","Type":"ContainerDied","Data":"25265583ff414d808800f39fda3565ffaa38570825b4c8f313cb7c2cbdb3a374"} Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.566830 5000 generic.go:334] "Generic (PLEG): container finished" podID="37652792-2853-4edf-a1e4-c0f51291b3c4" containerID="c0b86e428148a8829ec674d8d1c1348f9988c252f570b4589d9023df9df696ab" exitCode=0 Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.566916 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kq56w" event={"ID":"37652792-2853-4edf-a1e4-c0f51291b3c4","Type":"ContainerDied","Data":"c0b86e428148a8829ec674d8d1c1348f9988c252f570b4589d9023df9df696ab"} Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.568840 5000 generic.go:334] "Generic (PLEG): container finished" podID="ac82245a-da6c-4a0a-98a2-404935fbfb64" containerID="bfbce37f38c34c070eab3490287310fb893446752b76ae0f8ae5033d19bb4284" exitCode=0 Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.568974 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fa44-account-create-update-l84sv" 
event={"ID":"ac82245a-da6c-4a0a-98a2-404935fbfb64","Type":"ContainerDied","Data":"bfbce37f38c34c070eab3490287310fb893446752b76ae0f8ae5033d19bb4284"} Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.570692 5000 generic.go:334] "Generic (PLEG): container finished" podID="56ff8f19-5fd1-41f3-b417-1d32146bad28" containerID="17146eaf4414459be821baa03ee865f4422c3c9fd02929bf18a8fd7cf6b5e1b3" exitCode=0 Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.571648 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-s2pv5" event={"ID":"56ff8f19-5fd1-41f3-b417-1d32146bad28","Type":"ContainerDied","Data":"17146eaf4414459be821baa03ee865f4422c3c9fd02929bf18a8fd7cf6b5e1b3"} Jan 05 21:50:32 crc kubenswrapper[5000]: I0105 21:50:32.632679 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.144912 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-l4nvl" Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.233421 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pvhrq"] Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.270129 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qq7b\" (UniqueName: \"kubernetes.io/projected/1fd9b04a-feba-4af2-a02f-be6af11c059c-kube-api-access-7qq7b\") pod \"1fd9b04a-feba-4af2-a02f-be6af11c059c\" (UID: \"1fd9b04a-feba-4af2-a02f-be6af11c059c\") " Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.270187 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd9b04a-feba-4af2-a02f-be6af11c059c-operator-scripts\") pod \"1fd9b04a-feba-4af2-a02f-be6af11c059c\" (UID: \"1fd9b04a-feba-4af2-a02f-be6af11c059c\") " Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.271162 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fd9b04a-feba-4af2-a02f-be6af11c059c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1fd9b04a-feba-4af2-a02f-be6af11c059c" (UID: "1fd9b04a-feba-4af2-a02f-be6af11c059c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.274987 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd9b04a-feba-4af2-a02f-be6af11c059c-kube-api-access-7qq7b" (OuterVolumeSpecName: "kube-api-access-7qq7b") pod "1fd9b04a-feba-4af2-a02f-be6af11c059c" (UID: "1fd9b04a-feba-4af2-a02f-be6af11c059c"). InnerVolumeSpecName "kube-api-access-7qq7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.374669 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd9b04a-feba-4af2-a02f-be6af11c059c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.374707 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qq7b\" (UniqueName: \"kubernetes.io/projected/1fd9b04a-feba-4af2-a02f-be6af11c059c-kube-api-access-7qq7b\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.613518 5000 generic.go:334] "Generic (PLEG): container finished" podID="6fa6ddda-7b19-4d81-b114-b887e43ce7e2" containerID="868be418d5303816019d5ae684f9bcb8a9e2b0fa98e8d4d8a39046a000e97481" exitCode=0 Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.613614 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5515-account-create-update-7wkj4" event={"ID":"6fa6ddda-7b19-4d81-b114-b887e43ce7e2","Type":"ContainerDied","Data":"868be418d5303816019d5ae684f9bcb8a9e2b0fa98e8d4d8a39046a000e97481"} Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.619459 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" event={"ID":"a889aad2-1507-4494-ad0b-16e298f1cd4d","Type":"ContainerStarted","Data":"cad2eb2a11e8114cdf6da9797f7d49e42362dec0caaa6707cfc0bae85767fba4"} Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.625993 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg5r7"] Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.678492 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"14067638fbdf73556286e00b4d8c5805f65da948147c28ceb0d47bc067207ef7"} Jan 05 21:50:33 crc 
kubenswrapper[5000]: I0105 21:50:33.678860 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"bf8f88a52ba1f45ed450cf0d767f713c0915fe282b909b43c41a2375e385eb4c"} Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.678876 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"5011a408d5ac6c3a2a15ec3fd814480c51470465cce8858f1d2ca2f5f2d7e42d"} Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.710373 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-l4nvl" Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.710757 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-l4nvl" event={"ID":"1fd9b04a-feba-4af2-a02f-be6af11c059c","Type":"ContainerDied","Data":"7d9b5e7363593439b3678515aefa74061dead6ef8a50ca3daeed0bb452fa0b46"} Jan 05 21:50:33 crc kubenswrapper[5000]: I0105 21:50:33.710776 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d9b5e7363593439b3678515aefa74061dead6ef8a50ca3daeed0bb452fa0b46" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.327903 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fa44-account-create-update-l84sv" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.420686 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsh9n\" (UniqueName: \"kubernetes.io/projected/ac82245a-da6c-4a0a-98a2-404935fbfb64-kube-api-access-bsh9n\") pod \"ac82245a-da6c-4a0a-98a2-404935fbfb64\" (UID: \"ac82245a-da6c-4a0a-98a2-404935fbfb64\") " Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.422475 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac82245a-da6c-4a0a-98a2-404935fbfb64-operator-scripts\") pod \"ac82245a-da6c-4a0a-98a2-404935fbfb64\" (UID: \"ac82245a-da6c-4a0a-98a2-404935fbfb64\") " Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.424211 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac82245a-da6c-4a0a-98a2-404935fbfb64-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac82245a-da6c-4a0a-98a2-404935fbfb64" (UID: "ac82245a-da6c-4a0a-98a2-404935fbfb64"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.427281 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac82245a-da6c-4a0a-98a2-404935fbfb64-kube-api-access-bsh9n" (OuterVolumeSpecName: "kube-api-access-bsh9n") pod "ac82245a-da6c-4a0a-98a2-404935fbfb64" (UID: "ac82245a-da6c-4a0a-98a2-404935fbfb64"). InnerVolumeSpecName "kube-api-access-bsh9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.504209 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-s2pv5" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.507400 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dba2-account-create-update-pg6tz" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.513924 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kq56w" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.524287 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56ff8f19-5fd1-41f3-b417-1d32146bad28-operator-scripts\") pod \"56ff8f19-5fd1-41f3-b417-1d32146bad28\" (UID: \"56ff8f19-5fd1-41f3-b417-1d32146bad28\") " Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.524491 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bhrb\" (UniqueName: \"kubernetes.io/projected/56ff8f19-5fd1-41f3-b417-1d32146bad28-kube-api-access-4bhrb\") pod \"56ff8f19-5fd1-41f3-b417-1d32146bad28\" (UID: \"56ff8f19-5fd1-41f3-b417-1d32146bad28\") " Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.524800 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac82245a-da6c-4a0a-98a2-404935fbfb64-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.524820 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsh9n\" (UniqueName: \"kubernetes.io/projected/ac82245a-da6c-4a0a-98a2-404935fbfb64-kube-api-access-bsh9n\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.525162 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ff8f19-5fd1-41f3-b417-1d32146bad28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"56ff8f19-5fd1-41f3-b417-1d32146bad28" (UID: "56ff8f19-5fd1-41f3-b417-1d32146bad28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.527656 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ff8f19-5fd1-41f3-b417-1d32146bad28-kube-api-access-4bhrb" (OuterVolumeSpecName: "kube-api-access-4bhrb") pod "56ff8f19-5fd1-41f3-b417-1d32146bad28" (UID: "56ff8f19-5fd1-41f3-b417-1d32146bad28"). InnerVolumeSpecName "kube-api-access-4bhrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.626081 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b03a78cf-7207-491b-bdf2-dc30e3f70480-operator-scripts\") pod \"b03a78cf-7207-491b-bdf2-dc30e3f70480\" (UID: \"b03a78cf-7207-491b-bdf2-dc30e3f70480\") " Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.626166 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9mfm\" (UniqueName: \"kubernetes.io/projected/37652792-2853-4edf-a1e4-c0f51291b3c4-kube-api-access-c9mfm\") pod \"37652792-2853-4edf-a1e4-c0f51291b3c4\" (UID: \"37652792-2853-4edf-a1e4-c0f51291b3c4\") " Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.626215 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37652792-2853-4edf-a1e4-c0f51291b3c4-operator-scripts\") pod \"37652792-2853-4edf-a1e4-c0f51291b3c4\" (UID: \"37652792-2853-4edf-a1e4-c0f51291b3c4\") " Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.626269 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2gh4\" (UniqueName: 
\"kubernetes.io/projected/b03a78cf-7207-491b-bdf2-dc30e3f70480-kube-api-access-p2gh4\") pod \"b03a78cf-7207-491b-bdf2-dc30e3f70480\" (UID: \"b03a78cf-7207-491b-bdf2-dc30e3f70480\") " Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.626494 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56ff8f19-5fd1-41f3-b417-1d32146bad28-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.626505 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bhrb\" (UniqueName: \"kubernetes.io/projected/56ff8f19-5fd1-41f3-b417-1d32146bad28-kube-api-access-4bhrb\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.626561 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b03a78cf-7207-491b-bdf2-dc30e3f70480-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b03a78cf-7207-491b-bdf2-dc30e3f70480" (UID: "b03a78cf-7207-491b-bdf2-dc30e3f70480"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.626960 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37652792-2853-4edf-a1e4-c0f51291b3c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37652792-2853-4edf-a1e4-c0f51291b3c4" (UID: "37652792-2853-4edf-a1e4-c0f51291b3c4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.629860 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03a78cf-7207-491b-bdf2-dc30e3f70480-kube-api-access-p2gh4" (OuterVolumeSpecName: "kube-api-access-p2gh4") pod "b03a78cf-7207-491b-bdf2-dc30e3f70480" (UID: "b03a78cf-7207-491b-bdf2-dc30e3f70480"). 
InnerVolumeSpecName "kube-api-access-p2gh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.630054 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37652792-2853-4edf-a1e4-c0f51291b3c4-kube-api-access-c9mfm" (OuterVolumeSpecName: "kube-api-access-c9mfm") pod "37652792-2853-4edf-a1e4-c0f51291b3c4" (UID: "37652792-2853-4edf-a1e4-c0f51291b3c4"). InnerVolumeSpecName "kube-api-access-c9mfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.724594 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"73fec236cfdb73cb1da8c55f736eae54fb7f975f9505e2acc7fbc742976a2178"} Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.726086 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-s2pv5" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.726082 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-s2pv5" event={"ID":"56ff8f19-5fd1-41f3-b417-1d32146bad28","Type":"ContainerDied","Data":"d07cccba9b1b01e9620715c2d06bd890c955b94a23f5e356f154e5ca6170e55a"} Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.726133 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d07cccba9b1b01e9620715c2d06bd890c955b94a23f5e356f154e5ca6170e55a" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.727402 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b03a78cf-7207-491b-bdf2-dc30e3f70480-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.727421 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9mfm\" 
(UniqueName: \"kubernetes.io/projected/37652792-2853-4edf-a1e4-c0f51291b3c4-kube-api-access-c9mfm\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.727433 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37652792-2853-4edf-a1e4-c0f51291b3c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.727441 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2gh4\" (UniqueName: \"kubernetes.io/projected/b03a78cf-7207-491b-bdf2-dc30e3f70480-kube-api-access-p2gh4\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.727613 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dba2-account-create-update-pg6tz" event={"ID":"b03a78cf-7207-491b-bdf2-dc30e3f70480","Type":"ContainerDied","Data":"ef4e196ffb5a36eba642fef4952ce7705e929b1f99e6a8ce0664390313a20472"} Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.727645 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef4e196ffb5a36eba642fef4952ce7705e929b1f99e6a8ce0664390313a20472" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.727700 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-dba2-account-create-update-pg6tz" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.729550 5000 generic.go:334] "Generic (PLEG): container finished" podID="a889aad2-1507-4494-ad0b-16e298f1cd4d" containerID="849e35a39734656f1866959835b2565b2149dd9f85151ae61f6f77765ae4f347" exitCode=0 Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.729634 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" event={"ID":"a889aad2-1507-4494-ad0b-16e298f1cd4d","Type":"ContainerDied","Data":"849e35a39734656f1866959835b2565b2149dd9f85151ae61f6f77765ae4f347"} Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.731325 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kq56w" event={"ID":"37652792-2853-4edf-a1e4-c0f51291b3c4","Type":"ContainerDied","Data":"5f229c013f27bc8f738bb7e72d5f0162920896fc5d6641786caf8d200c82dc9f"} Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.731346 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f229c013f27bc8f738bb7e72d5f0162920896fc5d6641786caf8d200c82dc9f" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.731385 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kq56w" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.733527 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kg5r7" podUID="c136dd3d-0202-41d3-bdd8-6cc50947b925" containerName="registry-server" containerID="cri-o://76a30409e6e29ba65e1a6075eaa837d4dbdf801272bd0c2905bc68a35c520b5f" gracePeriod=2 Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.733632 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fa44-account-create-update-l84sv" Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.734093 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fa44-account-create-update-l84sv" event={"ID":"ac82245a-da6c-4a0a-98a2-404935fbfb64","Type":"ContainerDied","Data":"055dbd233bd06369ee824cbb4f13ce583c2a0a2ea19272a12bc5b8673493928e"} Jan 05 21:50:34 crc kubenswrapper[5000]: I0105 21:50:34.734218 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="055dbd233bd06369ee824cbb4f13ce583c2a0a2ea19272a12bc5b8673493928e" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.501582 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qrvsf"] Jan 05 21:50:35 crc kubenswrapper[5000]: E0105 21:50:35.502277 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff8f19-5fd1-41f3-b417-1d32146bad28" containerName="mariadb-database-create" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.502296 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ff8f19-5fd1-41f3-b417-1d32146bad28" containerName="mariadb-database-create" Jan 05 21:50:35 crc kubenswrapper[5000]: E0105 21:50:35.502307 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37652792-2853-4edf-a1e4-c0f51291b3c4" containerName="mariadb-database-create" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.502314 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="37652792-2853-4edf-a1e4-c0f51291b3c4" containerName="mariadb-database-create" Jan 05 21:50:35 crc kubenswrapper[5000]: E0105 21:50:35.502331 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac82245a-da6c-4a0a-98a2-404935fbfb64" containerName="mariadb-account-create-update" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.502338 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac82245a-da6c-4a0a-98a2-404935fbfb64" 
containerName="mariadb-account-create-update" Jan 05 21:50:35 crc kubenswrapper[5000]: E0105 21:50:35.502360 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03a78cf-7207-491b-bdf2-dc30e3f70480" containerName="mariadb-account-create-update" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.502367 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03a78cf-7207-491b-bdf2-dc30e3f70480" containerName="mariadb-account-create-update" Jan 05 21:50:35 crc kubenswrapper[5000]: E0105 21:50:35.502381 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd9b04a-feba-4af2-a02f-be6af11c059c" containerName="mariadb-database-create" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.502388 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd9b04a-feba-4af2-a02f-be6af11c059c" containerName="mariadb-database-create" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.502569 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ff8f19-5fd1-41f3-b417-1d32146bad28" containerName="mariadb-database-create" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.502592 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03a78cf-7207-491b-bdf2-dc30e3f70480" containerName="mariadb-account-create-update" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.502606 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac82245a-da6c-4a0a-98a2-404935fbfb64" containerName="mariadb-account-create-update" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.502618 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd9b04a-feba-4af2-a02f-be6af11c059c" containerName="mariadb-database-create" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.502629 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="37652792-2853-4edf-a1e4-c0f51291b3c4" containerName="mariadb-database-create" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 
21:50:35.503275 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qrvsf" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.507862 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.544242 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs8hj\" (UniqueName: \"kubernetes.io/projected/4c462c92-9ae1-4351-bd0b-e97d442e2b6a-kube-api-access-fs8hj\") pod \"root-account-create-update-qrvsf\" (UID: \"4c462c92-9ae1-4351-bd0b-e97d442e2b6a\") " pod="openstack/root-account-create-update-qrvsf" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.544610 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c462c92-9ae1-4351-bd0b-e97d442e2b6a-operator-scripts\") pod \"root-account-create-update-qrvsf\" (UID: \"4c462c92-9ae1-4351-bd0b-e97d442e2b6a\") " pod="openstack/root-account-create-update-qrvsf" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.547731 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qrvsf"] Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.646077 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c462c92-9ae1-4351-bd0b-e97d442e2b6a-operator-scripts\") pod \"root-account-create-update-qrvsf\" (UID: \"4c462c92-9ae1-4351-bd0b-e97d442e2b6a\") " pod="openstack/root-account-create-update-qrvsf" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.646157 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs8hj\" (UniqueName: 
\"kubernetes.io/projected/4c462c92-9ae1-4351-bd0b-e97d442e2b6a-kube-api-access-fs8hj\") pod \"root-account-create-update-qrvsf\" (UID: \"4c462c92-9ae1-4351-bd0b-e97d442e2b6a\") " pod="openstack/root-account-create-update-qrvsf" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.647156 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c462c92-9ae1-4351-bd0b-e97d442e2b6a-operator-scripts\") pod \"root-account-create-update-qrvsf\" (UID: \"4c462c92-9ae1-4351-bd0b-e97d442e2b6a\") " pod="openstack/root-account-create-update-qrvsf" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.670717 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs8hj\" (UniqueName: \"kubernetes.io/projected/4c462c92-9ae1-4351-bd0b-e97d442e2b6a-kube-api-access-fs8hj\") pod \"root-account-create-update-qrvsf\" (UID: \"4c462c92-9ae1-4351-bd0b-e97d442e2b6a\") " pod="openstack/root-account-create-update-qrvsf" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.744028 5000 generic.go:334] "Generic (PLEG): container finished" podID="c136dd3d-0202-41d3-bdd8-6cc50947b925" containerID="76a30409e6e29ba65e1a6075eaa837d4dbdf801272bd0c2905bc68a35c520b5f" exitCode=0 Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.744094 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg5r7" event={"ID":"c136dd3d-0202-41d3-bdd8-6cc50947b925","Type":"ContainerDied","Data":"76a30409e6e29ba65e1a6075eaa837d4dbdf801272bd0c2905bc68a35c520b5f"} Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.754130 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.755007 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:35 crc 
kubenswrapper[5000]: I0105 21:50:35.806656 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:35 crc kubenswrapper[5000]: I0105 21:50:35.826651 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qrvsf" Jan 05 21:50:36 crc kubenswrapper[5000]: I0105 21:50:36.794099 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:37 crc kubenswrapper[5000]: I0105 21:50:37.215740 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8427j"] Jan 05 21:50:38 crc kubenswrapper[5000]: I0105 21:50:38.777174 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8427j" podUID="41ae58aa-f381-41a3-a1d3-04dec22b2ca7" containerName="registry-server" containerID="cri-o://13d7266a89d384890b7542fe2dfe9a69631446ba60c49be4c8488734f7c2bf46" gracePeriod=2 Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.458263 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.477429 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5515-account-create-update-7wkj4" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.517982 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fa6ddda-7b19-4d81-b114-b887e43ce7e2-operator-scripts\") pod \"6fa6ddda-7b19-4d81-b114-b887e43ce7e2\" (UID: \"6fa6ddda-7b19-4d81-b114-b887e43ce7e2\") " Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.518029 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w72xp\" (UniqueName: \"kubernetes.io/projected/6fa6ddda-7b19-4d81-b114-b887e43ce7e2-kube-api-access-w72xp\") pod \"6fa6ddda-7b19-4d81-b114-b887e43ce7e2\" (UID: \"6fa6ddda-7b19-4d81-b114-b887e43ce7e2\") " Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.518056 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c136dd3d-0202-41d3-bdd8-6cc50947b925-catalog-content\") pod \"c136dd3d-0202-41d3-bdd8-6cc50947b925\" (UID: \"c136dd3d-0202-41d3-bdd8-6cc50947b925\") " Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.518188 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c136dd3d-0202-41d3-bdd8-6cc50947b925-utilities\") pod \"c136dd3d-0202-41d3-bdd8-6cc50947b925\" (UID: \"c136dd3d-0202-41d3-bdd8-6cc50947b925\") " Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.518692 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa6ddda-7b19-4d81-b114-b887e43ce7e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fa6ddda-7b19-4d81-b114-b887e43ce7e2" (UID: "6fa6ddda-7b19-4d81-b114-b887e43ce7e2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.518785 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c136dd3d-0202-41d3-bdd8-6cc50947b925-utilities" (OuterVolumeSpecName: "utilities") pod "c136dd3d-0202-41d3-bdd8-6cc50947b925" (UID: "c136dd3d-0202-41d3-bdd8-6cc50947b925"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.518886 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42nxk\" (UniqueName: \"kubernetes.io/projected/c136dd3d-0202-41d3-bdd8-6cc50947b925-kube-api-access-42nxk\") pod \"c136dd3d-0202-41d3-bdd8-6cc50947b925\" (UID: \"c136dd3d-0202-41d3-bdd8-6cc50947b925\") " Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.519567 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fa6ddda-7b19-4d81-b114-b887e43ce7e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.519583 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c136dd3d-0202-41d3-bdd8-6cc50947b925-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.533725 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c136dd3d-0202-41d3-bdd8-6cc50947b925-kube-api-access-42nxk" (OuterVolumeSpecName: "kube-api-access-42nxk") pod "c136dd3d-0202-41d3-bdd8-6cc50947b925" (UID: "c136dd3d-0202-41d3-bdd8-6cc50947b925"). InnerVolumeSpecName "kube-api-access-42nxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.536056 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa6ddda-7b19-4d81-b114-b887e43ce7e2-kube-api-access-w72xp" (OuterVolumeSpecName: "kube-api-access-w72xp") pod "6fa6ddda-7b19-4d81-b114-b887e43ce7e2" (UID: "6fa6ddda-7b19-4d81-b114-b887e43ce7e2"). InnerVolumeSpecName "kube-api-access-w72xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.548753 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c136dd3d-0202-41d3-bdd8-6cc50947b925-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c136dd3d-0202-41d3-bdd8-6cc50947b925" (UID: "c136dd3d-0202-41d3-bdd8-6cc50947b925"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.621668 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w72xp\" (UniqueName: \"kubernetes.io/projected/6fa6ddda-7b19-4d81-b114-b887e43ce7e2-kube-api-access-w72xp\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.622021 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c136dd3d-0202-41d3-bdd8-6cc50947b925-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.622031 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42nxk\" (UniqueName: \"kubernetes.io/projected/c136dd3d-0202-41d3-bdd8-6cc50947b925-kube-api-access-42nxk\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.805561 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" 
event={"ID":"a889aad2-1507-4494-ad0b-16e298f1cd4d","Type":"ContainerStarted","Data":"8d94bbc803c8f5a3d140af647ffc292a674dac3544bb58b31b2414f5fa8bd2be"} Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.805775 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.811221 5000 generic.go:334] "Generic (PLEG): container finished" podID="41ae58aa-f381-41a3-a1d3-04dec22b2ca7" containerID="13d7266a89d384890b7542fe2dfe9a69631446ba60c49be4c8488734f7c2bf46" exitCode=0 Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.811284 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8427j" event={"ID":"41ae58aa-f381-41a3-a1d3-04dec22b2ca7","Type":"ContainerDied","Data":"13d7266a89d384890b7542fe2dfe9a69631446ba60c49be4c8488734f7c2bf46"} Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.811308 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8427j" event={"ID":"41ae58aa-f381-41a3-a1d3-04dec22b2ca7","Type":"ContainerDied","Data":"1f70e52ba4158d711d98fc72fc41207b3246a018bf82e3bfbb5e875d83b49a72"} Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.811321 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f70e52ba4158d711d98fc72fc41207b3246a018bf82e3bfbb5e875d83b49a72" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.814398 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg5r7" event={"ID":"c136dd3d-0202-41d3-bdd8-6cc50947b925","Type":"ContainerDied","Data":"3944d5852cfda95d55eff38987365fb609573e4b60a0d17c8a7a1eaa34168d28"} Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.814410 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg5r7" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.814450 5000 scope.go:117] "RemoveContainer" containerID="76a30409e6e29ba65e1a6075eaa837d4dbdf801272bd0c2905bc68a35c520b5f" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.822506 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5515-account-create-update-7wkj4" event={"ID":"6fa6ddda-7b19-4d81-b114-b887e43ce7e2","Type":"ContainerDied","Data":"90e8fce0a394289fbf02915d20ce8b56d35ab93bf98773e4f8b5f7e83cbd4896"} Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.822546 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90e8fce0a394289fbf02915d20ce8b56d35ab93bf98773e4f8b5f7e83cbd4896" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.822615 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5515-account-create-update-7wkj4" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.911637 5000 scope.go:117] "RemoveContainer" containerID="003e0e78a54379ff8a1c07a4f2d3e6218da62748f807f43c995ff24fad2a62ea" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.933610 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.967436 5000 scope.go:117] "RemoveContainer" containerID="642c7a8349f301f9e8659e12ecc5c254719c7ecada84450e15d175e8b1dfb777" Jan 05 21:50:39 crc kubenswrapper[5000]: I0105 21:50:39.989408 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" podStartSLOduration=8.989384397 podStartE2EDuration="8.989384397s" podCreationTimestamp="2026-01-05 21:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:39.824818818 +0000 UTC m=+994.781021297" watchObservedRunningTime="2026-01-05 21:50:39.989384397 +0000 UTC m=+994.945586866" Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.027624 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsfmp\" (UniqueName: \"kubernetes.io/projected/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-kube-api-access-bsfmp\") pod \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\" (UID: \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\") " Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.027691 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-utilities\") pod \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\" (UID: \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\") " Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.027771 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-catalog-content\") pod \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\" (UID: \"41ae58aa-f381-41a3-a1d3-04dec22b2ca7\") " Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.036497 5000 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-utilities" (OuterVolumeSpecName: "utilities") pod "41ae58aa-f381-41a3-a1d3-04dec22b2ca7" (UID: "41ae58aa-f381-41a3-a1d3-04dec22b2ca7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.060048 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-kube-api-access-bsfmp" (OuterVolumeSpecName: "kube-api-access-bsfmp") pod "41ae58aa-f381-41a3-a1d3-04dec22b2ca7" (UID: "41ae58aa-f381-41a3-a1d3-04dec22b2ca7"). InnerVolumeSpecName "kube-api-access-bsfmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.066961 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg5r7"] Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.076766 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg5r7"] Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.122391 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41ae58aa-f381-41a3-a1d3-04dec22b2ca7" (UID: "41ae58aa-f381-41a3-a1d3-04dec22b2ca7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.133803 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.133836 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.133845 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsfmp\" (UniqueName: \"kubernetes.io/projected/41ae58aa-f381-41a3-a1d3-04dec22b2ca7-kube-api-access-bsfmp\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.390666 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qrvsf"] Jan 05 21:50:40 crc kubenswrapper[5000]: W0105 21:50:40.394101 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c462c92_9ae1_4351_bd0b_e97d442e2b6a.slice/crio-156483b8da16292326038cbba4de2f69509039616e2bff2489a3eeb9668934e6 WatchSource:0}: Error finding container 156483b8da16292326038cbba4de2f69509039616e2bff2489a3eeb9668934e6: Status 404 returned error can't find the container with id 156483b8da16292326038cbba4de2f69509039616e2bff2489a3eeb9668934e6 Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.848187 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6g5ww" event={"ID":"8e46dcd5-83ef-4a7b-a07b-a850071a330c","Type":"ContainerStarted","Data":"030df051cd17ef123b438baead653693d9fc0bcb2110e627dd98409882142999"} Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.850612 5000 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/root-account-create-update-qrvsf" event={"ID":"4c462c92-9ae1-4351-bd0b-e97d442e2b6a","Type":"ContainerStarted","Data":"db3604e0f238a934124a5f33778cd5fd48a0f7de3d0e002a1c744826947f2463"} Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.850637 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qrvsf" event={"ID":"4c462c92-9ae1-4351-bd0b-e97d442e2b6a","Type":"ContainerStarted","Data":"156483b8da16292326038cbba4de2f69509039616e2bff2489a3eeb9668934e6"} Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.873829 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6g5ww" podStartSLOduration=2.448868997 podStartE2EDuration="11.87380769s" podCreationTimestamp="2026-01-05 21:50:29 +0000 UTC" firstStartedPulling="2026-01-05 21:50:30.48674902 +0000 UTC m=+985.442951489" lastFinishedPulling="2026-01-05 21:50:39.911687713 +0000 UTC m=+994.867890182" observedRunningTime="2026-01-05 21:50:40.863995561 +0000 UTC m=+995.820198030" watchObservedRunningTime="2026-01-05 21:50:40.87380769 +0000 UTC m=+995.830010159" Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.882146 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"ccc403d1d223d816c5a209325eb5eafedfe69c0013d898cc072ea514857c5567"} Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.882190 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"e0a18c4321233398afd09fd98feec3f922b6aef45a0b32169aef37056613e709"} Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.882200 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"0e7fc6e9bae1ba99ef311b65ca34c6d9f0e8f41466f2d290eefbeddd7cef386d"} Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.882208 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"26e0c85c7b90dffebe454bc82688d3ebe5907738cfcc7201e893bdab1a57d2d1"} Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.883728 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-qrvsf" podStartSLOduration=5.883692302 podStartE2EDuration="5.883692302s" podCreationTimestamp="2026-01-05 21:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:40.881633633 +0000 UTC m=+995.837836102" watchObservedRunningTime="2026-01-05 21:50:40.883692302 +0000 UTC m=+995.839894771" Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.887020 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8427j" Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.965963 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8427j"] Jan 05 21:50:40 crc kubenswrapper[5000]: I0105 21:50:40.983780 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8427j"] Jan 05 21:50:41 crc kubenswrapper[5000]: I0105 21:50:41.334046 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ae58aa-f381-41a3-a1d3-04dec22b2ca7" path="/var/lib/kubelet/pods/41ae58aa-f381-41a3-a1d3-04dec22b2ca7/volumes" Jan 05 21:50:41 crc kubenswrapper[5000]: I0105 21:50:41.343601 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c136dd3d-0202-41d3-bdd8-6cc50947b925" path="/var/lib/kubelet/pods/c136dd3d-0202-41d3-bdd8-6cc50947b925/volumes" Jan 05 21:50:41 crc kubenswrapper[5000]: I0105 21:50:41.899523 5000 generic.go:334] "Generic (PLEG): container finished" podID="4c462c92-9ae1-4351-bd0b-e97d442e2b6a" containerID="db3604e0f238a934124a5f33778cd5fd48a0f7de3d0e002a1c744826947f2463" exitCode=0 Jan 05 21:50:41 crc kubenswrapper[5000]: I0105 21:50:41.899586 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qrvsf" event={"ID":"4c462c92-9ae1-4351-bd0b-e97d442e2b6a","Type":"ContainerDied","Data":"db3604e0f238a934124a5f33778cd5fd48a0f7de3d0e002a1c744826947f2463"} Jan 05 21:50:41 crc kubenswrapper[5000]: I0105 21:50:41.906983 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"aa5561a1d9c1e76820a796d42fca6500db492fd745b4da36e25bfaf24567b4b7"} Jan 05 21:50:41 crc kubenswrapper[5000]: I0105 21:50:41.907045 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"7cb05c673a3bd783358fb8d17d77c6f646530cd7bd293fa3d72e23da15ba8d4c"} Jan 05 21:50:41 crc kubenswrapper[5000]: I0105 21:50:41.907069 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f","Type":"ContainerStarted","Data":"c3957257746aaaeeeb39e568b577b4523e4df7b3451c6a6409af66c85be62f70"} Jan 05 21:50:41 crc kubenswrapper[5000]: I0105 21:50:41.975329 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.019565797 podStartE2EDuration="47.97531025s" podCreationTimestamp="2026-01-05 21:49:54 +0000 UTC" firstStartedPulling="2026-01-05 21:50:27.958341098 +0000 UTC m=+982.914543567" lastFinishedPulling="2026-01-05 21:50:39.914085551 +0000 UTC m=+994.870288020" observedRunningTime="2026-01-05 21:50:41.965965384 +0000 UTC m=+996.922167853" watchObservedRunningTime="2026-01-05 21:50:41.97531025 +0000 UTC m=+996.931512719" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.237287 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pvhrq"] Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.237521 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" podUID="a889aad2-1507-4494-ad0b-16e298f1cd4d" containerName="dnsmasq-dns" containerID="cri-o://8d94bbc803c8f5a3d140af647ffc292a674dac3544bb58b31b2414f5fa8bd2be" gracePeriod=10 Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.276491 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-x7k22"] Jan 05 21:50:42 crc kubenswrapper[5000]: E0105 21:50:42.276803 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c136dd3d-0202-41d3-bdd8-6cc50947b925" containerName="extract-content" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 
21:50:42.276814 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="c136dd3d-0202-41d3-bdd8-6cc50947b925" containerName="extract-content" Jan 05 21:50:42 crc kubenswrapper[5000]: E0105 21:50:42.276829 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ae58aa-f381-41a3-a1d3-04dec22b2ca7" containerName="extract-utilities" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.276835 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ae58aa-f381-41a3-a1d3-04dec22b2ca7" containerName="extract-utilities" Jan 05 21:50:42 crc kubenswrapper[5000]: E0105 21:50:42.276850 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa6ddda-7b19-4d81-b114-b887e43ce7e2" containerName="mariadb-account-create-update" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.276855 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa6ddda-7b19-4d81-b114-b887e43ce7e2" containerName="mariadb-account-create-update" Jan 05 21:50:42 crc kubenswrapper[5000]: E0105 21:50:42.276868 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ae58aa-f381-41a3-a1d3-04dec22b2ca7" containerName="extract-content" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.276874 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ae58aa-f381-41a3-a1d3-04dec22b2ca7" containerName="extract-content" Jan 05 21:50:42 crc kubenswrapper[5000]: E0105 21:50:42.276907 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c136dd3d-0202-41d3-bdd8-6cc50947b925" containerName="extract-utilities" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.276914 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="c136dd3d-0202-41d3-bdd8-6cc50947b925" containerName="extract-utilities" Jan 05 21:50:42 crc kubenswrapper[5000]: E0105 21:50:42.276922 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c136dd3d-0202-41d3-bdd8-6cc50947b925" containerName="registry-server" Jan 05 21:50:42 crc 
kubenswrapper[5000]: I0105 21:50:42.276927 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="c136dd3d-0202-41d3-bdd8-6cc50947b925" containerName="registry-server" Jan 05 21:50:42 crc kubenswrapper[5000]: E0105 21:50:42.276941 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ae58aa-f381-41a3-a1d3-04dec22b2ca7" containerName="registry-server" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.276947 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ae58aa-f381-41a3-a1d3-04dec22b2ca7" containerName="registry-server" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.277112 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa6ddda-7b19-4d81-b114-b887e43ce7e2" containerName="mariadb-account-create-update" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.277126 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="c136dd3d-0202-41d3-bdd8-6cc50947b925" containerName="registry-server" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.277143 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ae58aa-f381-41a3-a1d3-04dec22b2ca7" containerName="registry-server" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.277956 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.280145 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.293927 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-x7k22"] Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.379226 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql99f\" (UniqueName: \"kubernetes.io/projected/3c5ea572-39e7-4350-98d8-081a9c134f0e-kube-api-access-ql99f\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.379289 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-config\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.379321 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.379371 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.379395 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.379460 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.481307 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-config\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.481368 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.481414 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.481440 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.481491 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.481518 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql99f\" (UniqueName: \"kubernetes.io/projected/3c5ea572-39e7-4350-98d8-081a9c134f0e-kube-api-access-ql99f\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.482268 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-config\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.482362 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 
crc kubenswrapper[5000]: I0105 21:50:42.482938 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.483600 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.484126 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.512770 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql99f\" (UniqueName: \"kubernetes.io/projected/3c5ea572-39e7-4350-98d8-081a9c134f0e-kube-api-access-ql99f\") pod \"dnsmasq-dns-74f6bcbc87-x7k22\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.641666 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:42 crc kubenswrapper[5000]: I0105 21:50:42.802594 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:42.948526 5000 generic.go:334] "Generic (PLEG): container finished" podID="a889aad2-1507-4494-ad0b-16e298f1cd4d" containerID="8d94bbc803c8f5a3d140af647ffc292a674dac3544bb58b31b2414f5fa8bd2be" exitCode=0 Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:42.949791 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:42.950201 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" event={"ID":"a889aad2-1507-4494-ad0b-16e298f1cd4d","Type":"ContainerDied","Data":"8d94bbc803c8f5a3d140af647ffc292a674dac3544bb58b31b2414f5fa8bd2be"} Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:42.950225 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-pvhrq" event={"ID":"a889aad2-1507-4494-ad0b-16e298f1cd4d","Type":"ContainerDied","Data":"cad2eb2a11e8114cdf6da9797f7d49e42362dec0caaa6707cfc0bae85767fba4"} Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:42.950242 5000 scope.go:117] "RemoveContainer" containerID="8d94bbc803c8f5a3d140af647ffc292a674dac3544bb58b31b2414f5fa8bd2be" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:42.989803 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkn8n\" (UniqueName: \"kubernetes.io/projected/a889aad2-1507-4494-ad0b-16e298f1cd4d-kube-api-access-kkn8n\") pod \"a889aad2-1507-4494-ad0b-16e298f1cd4d\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:42.989859 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-ovsdbserver-nb\") pod \"a889aad2-1507-4494-ad0b-16e298f1cd4d\" (UID: 
\"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:42.989927 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-ovsdbserver-sb\") pod \"a889aad2-1507-4494-ad0b-16e298f1cd4d\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:42.989992 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-dns-svc\") pod \"a889aad2-1507-4494-ad0b-16e298f1cd4d\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:42.990037 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-config\") pod \"a889aad2-1507-4494-ad0b-16e298f1cd4d\" (UID: \"a889aad2-1507-4494-ad0b-16e298f1cd4d\") " Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.008107 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a889aad2-1507-4494-ad0b-16e298f1cd4d-kube-api-access-kkn8n" (OuterVolumeSpecName: "kube-api-access-kkn8n") pod "a889aad2-1507-4494-ad0b-16e298f1cd4d" (UID: "a889aad2-1507-4494-ad0b-16e298f1cd4d"). InnerVolumeSpecName "kube-api-access-kkn8n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.088752 5000 scope.go:117] "RemoveContainer" containerID="849e35a39734656f1866959835b2565b2149dd9f85151ae61f6f77765ae4f347" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.089091 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a889aad2-1507-4494-ad0b-16e298f1cd4d" (UID: "a889aad2-1507-4494-ad0b-16e298f1cd4d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.089682 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-config" (OuterVolumeSpecName: "config") pod "a889aad2-1507-4494-ad0b-16e298f1cd4d" (UID: "a889aad2-1507-4494-ad0b-16e298f1cd4d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.096240 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkn8n\" (UniqueName: \"kubernetes.io/projected/a889aad2-1507-4494-ad0b-16e298f1cd4d-kube-api-access-kkn8n\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.096262 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.096271 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.104133 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a889aad2-1507-4494-ad0b-16e298f1cd4d" (UID: "a889aad2-1507-4494-ad0b-16e298f1cd4d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.107996 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-x7k22"] Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.116356 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a889aad2-1507-4494-ad0b-16e298f1cd4d" (UID: "a889aad2-1507-4494-ad0b-16e298f1cd4d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:43 crc kubenswrapper[5000]: W0105 21:50:43.122783 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c5ea572_39e7_4350_98d8_081a9c134f0e.slice/crio-a7fb7d8615836b56c5c80f81f4cea68564e7e0f228c7625497b91cf2a61d3a07 WatchSource:0}: Error finding container a7fb7d8615836b56c5c80f81f4cea68564e7e0f228c7625497b91cf2a61d3a07: Status 404 returned error can't find the container with id a7fb7d8615836b56c5c80f81f4cea68564e7e0f228c7625497b91cf2a61d3a07 Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.149860 5000 scope.go:117] "RemoveContainer" containerID="8d94bbc803c8f5a3d140af647ffc292a674dac3544bb58b31b2414f5fa8bd2be" Jan 05 21:50:43 crc kubenswrapper[5000]: E0105 21:50:43.150308 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d94bbc803c8f5a3d140af647ffc292a674dac3544bb58b31b2414f5fa8bd2be\": container with ID starting with 8d94bbc803c8f5a3d140af647ffc292a674dac3544bb58b31b2414f5fa8bd2be not found: ID does not exist" containerID="8d94bbc803c8f5a3d140af647ffc292a674dac3544bb58b31b2414f5fa8bd2be" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.150363 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d94bbc803c8f5a3d140af647ffc292a674dac3544bb58b31b2414f5fa8bd2be"} err="failed to get container status \"8d94bbc803c8f5a3d140af647ffc292a674dac3544bb58b31b2414f5fa8bd2be\": rpc error: code = NotFound desc = could not find container \"8d94bbc803c8f5a3d140af647ffc292a674dac3544bb58b31b2414f5fa8bd2be\": container with ID starting with 8d94bbc803c8f5a3d140af647ffc292a674dac3544bb58b31b2414f5fa8bd2be not found: ID does not exist" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.150400 5000 scope.go:117] "RemoveContainer" 
containerID="849e35a39734656f1866959835b2565b2149dd9f85151ae61f6f77765ae4f347" Jan 05 21:50:43 crc kubenswrapper[5000]: E0105 21:50:43.150837 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"849e35a39734656f1866959835b2565b2149dd9f85151ae61f6f77765ae4f347\": container with ID starting with 849e35a39734656f1866959835b2565b2149dd9f85151ae61f6f77765ae4f347 not found: ID does not exist" containerID="849e35a39734656f1866959835b2565b2149dd9f85151ae61f6f77765ae4f347" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.150859 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849e35a39734656f1866959835b2565b2149dd9f85151ae61f6f77765ae4f347"} err="failed to get container status \"849e35a39734656f1866959835b2565b2149dd9f85151ae61f6f77765ae4f347\": rpc error: code = NotFound desc = could not find container \"849e35a39734656f1866959835b2565b2149dd9f85151ae61f6f77765ae4f347\": container with ID starting with 849e35a39734656f1866959835b2565b2149dd9f85151ae61f6f77765ae4f347 not found: ID does not exist" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.197513 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.197540 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a889aad2-1507-4494-ad0b-16e298f1cd4d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.289482 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pvhrq"] Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.297810 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-pvhrq"] Jan 05 
21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.332419 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a889aad2-1507-4494-ad0b-16e298f1cd4d" path="/var/lib/kubelet/pods/a889aad2-1507-4494-ad0b-16e298f1cd4d/volumes" Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.960321 5000 generic.go:334] "Generic (PLEG): container finished" podID="3c5ea572-39e7-4350-98d8-081a9c134f0e" containerID="4bec26acaa16a245e2de4baac13add0f0c05986da8cba47eb05cead1d97b4e5f" exitCode=0 Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.960373 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" event={"ID":"3c5ea572-39e7-4350-98d8-081a9c134f0e","Type":"ContainerDied","Data":"4bec26acaa16a245e2de4baac13add0f0c05986da8cba47eb05cead1d97b4e5f"} Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.960702 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" event={"ID":"3c5ea572-39e7-4350-98d8-081a9c134f0e","Type":"ContainerStarted","Data":"a7fb7d8615836b56c5c80f81f4cea68564e7e0f228c7625497b91cf2a61d3a07"} Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.963782 5000 generic.go:334] "Generic (PLEG): container finished" podID="8e46dcd5-83ef-4a7b-a07b-a850071a330c" containerID="030df051cd17ef123b438baead653693d9fc0bcb2110e627dd98409882142999" exitCode=0 Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.963850 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6g5ww" event={"ID":"8e46dcd5-83ef-4a7b-a07b-a850071a330c","Type":"ContainerDied","Data":"030df051cd17ef123b438baead653693d9fc0bcb2110e627dd98409882142999"} Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.965386 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qrvsf" 
event={"ID":"4c462c92-9ae1-4351-bd0b-e97d442e2b6a","Type":"ContainerDied","Data":"156483b8da16292326038cbba4de2f69509039616e2bff2489a3eeb9668934e6"} Jan 05 21:50:43 crc kubenswrapper[5000]: I0105 21:50:43.965415 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="156483b8da16292326038cbba4de2f69509039616e2bff2489a3eeb9668934e6" Jan 05 21:50:44 crc kubenswrapper[5000]: I0105 21:50:44.113733 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qrvsf" Jan 05 21:50:44 crc kubenswrapper[5000]: I0105 21:50:44.240716 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs8hj\" (UniqueName: \"kubernetes.io/projected/4c462c92-9ae1-4351-bd0b-e97d442e2b6a-kube-api-access-fs8hj\") pod \"4c462c92-9ae1-4351-bd0b-e97d442e2b6a\" (UID: \"4c462c92-9ae1-4351-bd0b-e97d442e2b6a\") " Jan 05 21:50:44 crc kubenswrapper[5000]: I0105 21:50:44.241139 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c462c92-9ae1-4351-bd0b-e97d442e2b6a-operator-scripts\") pod \"4c462c92-9ae1-4351-bd0b-e97d442e2b6a\" (UID: \"4c462c92-9ae1-4351-bd0b-e97d442e2b6a\") " Jan 05 21:50:44 crc kubenswrapper[5000]: I0105 21:50:44.241469 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c462c92-9ae1-4351-bd0b-e97d442e2b6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c462c92-9ae1-4351-bd0b-e97d442e2b6a" (UID: "4c462c92-9ae1-4351-bd0b-e97d442e2b6a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:44 crc kubenswrapper[5000]: I0105 21:50:44.241683 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c462c92-9ae1-4351-bd0b-e97d442e2b6a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:44 crc kubenswrapper[5000]: I0105 21:50:44.246635 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c462c92-9ae1-4351-bd0b-e97d442e2b6a-kube-api-access-fs8hj" (OuterVolumeSpecName: "kube-api-access-fs8hj") pod "4c462c92-9ae1-4351-bd0b-e97d442e2b6a" (UID: "4c462c92-9ae1-4351-bd0b-e97d442e2b6a"). InnerVolumeSpecName "kube-api-access-fs8hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:44 crc kubenswrapper[5000]: I0105 21:50:44.345050 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs8hj\" (UniqueName: \"kubernetes.io/projected/4c462c92-9ae1-4351-bd0b-e97d442e2b6a-kube-api-access-fs8hj\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:44 crc kubenswrapper[5000]: I0105 21:50:44.975108 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" event={"ID":"3c5ea572-39e7-4350-98d8-081a9c134f0e","Type":"ContainerStarted","Data":"545974cd6b41f1206f9c9bf471d3c26f6eeb6baf9a062b6f84d5ea0d35ade5ea"} Jan 05 21:50:44 crc kubenswrapper[5000]: I0105 21:50:44.975130 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qrvsf" Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.007752 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" podStartSLOduration=3.007730525 podStartE2EDuration="3.007730525s" podCreationTimestamp="2026-01-05 21:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:45.001011953 +0000 UTC m=+999.957214432" watchObservedRunningTime="2026-01-05 21:50:45.007730525 +0000 UTC m=+999.963933004" Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.395505 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6g5ww" Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.461982 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e46dcd5-83ef-4a7b-a07b-a850071a330c-combined-ca-bundle\") pod \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\" (UID: \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\") " Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.462029 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-498mv\" (UniqueName: \"kubernetes.io/projected/8e46dcd5-83ef-4a7b-a07b-a850071a330c-kube-api-access-498mv\") pod \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\" (UID: \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\") " Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.462101 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e46dcd5-83ef-4a7b-a07b-a850071a330c-config-data\") pod \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\" (UID: \"8e46dcd5-83ef-4a7b-a07b-a850071a330c\") " Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.466198 5000 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e46dcd5-83ef-4a7b-a07b-a850071a330c-kube-api-access-498mv" (OuterVolumeSpecName: "kube-api-access-498mv") pod "8e46dcd5-83ef-4a7b-a07b-a850071a330c" (UID: "8e46dcd5-83ef-4a7b-a07b-a850071a330c"). InnerVolumeSpecName "kube-api-access-498mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.484512 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e46dcd5-83ef-4a7b-a07b-a850071a330c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e46dcd5-83ef-4a7b-a07b-a850071a330c" (UID: "8e46dcd5-83ef-4a7b-a07b-a850071a330c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.515865 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e46dcd5-83ef-4a7b-a07b-a850071a330c-config-data" (OuterVolumeSpecName: "config-data") pod "8e46dcd5-83ef-4a7b-a07b-a850071a330c" (UID: "8e46dcd5-83ef-4a7b-a07b-a850071a330c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.563105 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e46dcd5-83ef-4a7b-a07b-a850071a330c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.563370 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-498mv\" (UniqueName: \"kubernetes.io/projected/8e46dcd5-83ef-4a7b-a07b-a850071a330c-kube-api-access-498mv\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.563385 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e46dcd5-83ef-4a7b-a07b-a850071a330c-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.984029 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6g5ww" event={"ID":"8e46dcd5-83ef-4a7b-a07b-a850071a330c","Type":"ContainerDied","Data":"a7354c3240c98faa5c10d715ad54e89d7ad1199519a236e0e93cebb5077f1672"} Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.984072 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7354c3240c98faa5c10d715ad54e89d7ad1199519a236e0e93cebb5077f1672" Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.984052 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6g5ww" Jan 05 21:50:45 crc kubenswrapper[5000]: I0105 21:50:45.984165 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.301616 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2xk2z"] Jan 05 21:50:46 crc kubenswrapper[5000]: E0105 21:50:46.301945 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a889aad2-1507-4494-ad0b-16e298f1cd4d" containerName="init" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.301962 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a889aad2-1507-4494-ad0b-16e298f1cd4d" containerName="init" Jan 05 21:50:46 crc kubenswrapper[5000]: E0105 21:50:46.301975 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a889aad2-1507-4494-ad0b-16e298f1cd4d" containerName="dnsmasq-dns" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.301982 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a889aad2-1507-4494-ad0b-16e298f1cd4d" containerName="dnsmasq-dns" Jan 05 21:50:46 crc kubenswrapper[5000]: E0105 21:50:46.301995 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e46dcd5-83ef-4a7b-a07b-a850071a330c" containerName="keystone-db-sync" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.302002 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e46dcd5-83ef-4a7b-a07b-a850071a330c" containerName="keystone-db-sync" Jan 05 21:50:46 crc kubenswrapper[5000]: E0105 21:50:46.302023 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c462c92-9ae1-4351-bd0b-e97d442e2b6a" containerName="mariadb-account-create-update" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.302029 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c462c92-9ae1-4351-bd0b-e97d442e2b6a" containerName="mariadb-account-create-update" Jan 05 
21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.302195 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a889aad2-1507-4494-ad0b-16e298f1cd4d" containerName="dnsmasq-dns" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.302211 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c462c92-9ae1-4351-bd0b-e97d442e2b6a" containerName="mariadb-account-create-update" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.302222 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e46dcd5-83ef-4a7b-a07b-a850071a330c" containerName="keystone-db-sync" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.302760 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.313001 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.313235 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.313412 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zcmgb" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.313644 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.329864 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2xk2z"] Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.343199 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.381973 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-x7k22"] Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.408846 5000 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kgps\" (UniqueName: \"kubernetes.io/projected/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-kube-api-access-4kgps\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.409056 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-fernet-keys\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.409185 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-credential-keys\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.409259 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-combined-ca-bundle\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.409421 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-config-data\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.409484 5000 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-scripts\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.464723 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-thcn6"] Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.466258 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.477087 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-thcn6"] Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.510915 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.511206 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-config-data\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.511230 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-scripts\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc 
kubenswrapper[5000]: I0105 21:50:46.511280 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kgps\" (UniqueName: \"kubernetes.io/projected/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-kube-api-access-4kgps\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.511299 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-dns-svc\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.511320 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-fernet-keys\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.511370 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-credential-keys\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.511412 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-combined-ca-bundle\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.511429 5000 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ckc9\" (UniqueName: \"kubernetes.io/projected/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-kube-api-access-5ckc9\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.511459 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-config\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.511474 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.511488 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.520494 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7796c48dd9-nmfpk"] Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.521851 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.527212 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.527352 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-tgl94" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.527581 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.528122 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.538870 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-config-data\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.540596 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-credential-keys\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.551882 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-scripts\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.552019 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-combined-ca-bundle\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.552158 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-fernet-keys\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.561576 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kgps\" (UniqueName: \"kubernetes.io/projected/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-kube-api-access-4kgps\") pod \"keystone-bootstrap-2xk2z\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.587688 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7796c48dd9-nmfpk"] Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.622292 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-config\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.622504 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.622571 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.622687 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.622762 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e949189a-2b75-4081-949f-07ec69d377b5-horizon-secret-key\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.622843 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-dns-svc\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.622976 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e949189a-2b75-4081-949f-07ec69d377b5-config-data\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.623042 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e949189a-2b75-4081-949f-07ec69d377b5-scripts\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.623105 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzpxx\" (UniqueName: \"kubernetes.io/projected/e949189a-2b75-4081-949f-07ec69d377b5-kube-api-access-xzpxx\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.623177 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e949189a-2b75-4081-949f-07ec69d377b5-logs\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.623271 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ckc9\" (UniqueName: \"kubernetes.io/projected/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-kube-api-access-5ckc9\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.624262 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-config\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.624802 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.625397 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.625978 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.626537 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-dns-svc\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.634342 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.683794 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-prdrd"] Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.699408 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.710773 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.716910 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ckc9\" (UniqueName: \"kubernetes.io/projected/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-kube-api-access-5ckc9\") pod \"dnsmasq-dns-847c4cc679-thcn6\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.733004 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e949189a-2b75-4081-949f-07ec69d377b5-logs\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.756809 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-scripts\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.755973 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-prdrd"] Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.717939 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.756981 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-combined-ca-bundle\") pod \"cinder-db-sync-prdrd\" (UID: 
\"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.757075 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-config-data\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.733424 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e949189a-2b75-4081-949f-07ec69d377b5-logs\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.718098 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j92rx" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.757476 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-db-sync-config-data\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.757854 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e949189a-2b75-4081-949f-07ec69d377b5-horizon-secret-key\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.758158 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-etc-machine-id\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.758455 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7zdp\" (UniqueName: \"kubernetes.io/projected/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-kube-api-access-s7zdp\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.758668 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e949189a-2b75-4081-949f-07ec69d377b5-config-data\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.758767 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e949189a-2b75-4081-949f-07ec69d377b5-scripts\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.759052 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzpxx\" (UniqueName: \"kubernetes.io/projected/e949189a-2b75-4081-949f-07ec69d377b5-kube-api-access-xzpxx\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.780798 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e949189a-2b75-4081-949f-07ec69d377b5-config-data\") pod 
\"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.781461 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e949189a-2b75-4081-949f-07ec69d377b5-horizon-secret-key\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.812296 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.813834 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e949189a-2b75-4081-949f-07ec69d377b5-scripts\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.853779 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-thcn6"] Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.877811 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzpxx\" (UniqueName: \"kubernetes.io/projected/e949189a-2b75-4081-949f-07ec69d377b5-kube-api-access-xzpxx\") pod \"horizon-7796c48dd9-nmfpk\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.917951 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-config-data\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: 
I0105 21:50:46.918037 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-db-sync-config-data\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.918062 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-etc-machine-id\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.918089 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7zdp\" (UniqueName: \"kubernetes.io/projected/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-kube-api-access-s7zdp\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.918141 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-scripts\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.918158 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-combined-ca-bundle\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.919012 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-etc-machine-id\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.927032 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-db-sync-config-data\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.931681 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-scripts\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.932413 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-combined-ca-bundle\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.934093 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.940967 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-config-data\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.943333 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.948264 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.950329 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.950539 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.953277 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-fz42m"] Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.954308 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fz42m" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.956136 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-t67pj" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.957061 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.960205 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.968767 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fz42m"] Jan 05 21:50:46 crc kubenswrapper[5000]: I0105 21:50:46.975659 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7zdp\" (UniqueName: \"kubernetes.io/projected/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-kube-api-access-s7zdp\") pod \"cinder-db-sync-prdrd\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") " pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.003597 5000 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.010937 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-65jnl"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.012116 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-65jnl" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.018816 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.018861 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-j8sxz" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.024860 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gwm8h"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.026485 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.039121 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-dgtdq"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.040170 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.044002 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.044262 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-67b6v" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.044710 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.048182 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gwm8h"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.065908 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-65jnl"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.104408 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.107010 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.112287 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.113299 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.113464 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.113619 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bqd9g" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.114754 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dgtdq"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.120683 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.120719 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktnjn\" (UniqueName: \"kubernetes.io/projected/c51a1013-b3ea-444a-b578-6cfc91b1c283-kube-api-access-ktnjn\") pod \"neutron-db-sync-fz42m\" (UID: \"c51a1013-b3ea-444a-b578-6cfc91b1c283\") " pod="openstack/neutron-db-sync-fz42m" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.120768 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce305106-1701-4e2e-b87a-fc358e9c99d2-db-sync-config-data\") pod 
\"barbican-db-sync-65jnl\" (UID: \"ce305106-1701-4e2e-b87a-fc358e9c99d2\") " pod="openstack/barbican-db-sync-65jnl" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.120786 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.120807 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.120846 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-scripts\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.120879 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77e33e26-6a57-4f48-9d16-3bb5502b1f76-run-httpd\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.121486 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgkgl\" (UniqueName: \"kubernetes.io/projected/77e33e26-6a57-4f48-9d16-3bb5502b1f76-kube-api-access-bgkgl\") pod \"ceilometer-0\" (UID: 
\"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.121541 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77e33e26-6a57-4f48-9d16-3bb5502b1f76-log-httpd\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.121563 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.121625 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzfqf\" (UniqueName: \"kubernetes.io/projected/ce305106-1701-4e2e-b87a-fc358e9c99d2-kube-api-access-qzfqf\") pod \"barbican-db-sync-65jnl\" (UID: \"ce305106-1701-4e2e-b87a-fc358e9c99d2\") " pod="openstack/barbican-db-sync-65jnl" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.121644 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.121664 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c51a1013-b3ea-444a-b578-6cfc91b1c283-config\") pod \"neutron-db-sync-fz42m\" (UID: \"c51a1013-b3ea-444a-b578-6cfc91b1c283\") 
" pod="openstack/neutron-db-sync-fz42m" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.121688 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-config-data\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.121708 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce305106-1701-4e2e-b87a-fc358e9c99d2-combined-ca-bundle\") pod \"barbican-db-sync-65jnl\" (UID: \"ce305106-1701-4e2e-b87a-fc358e9c99d2\") " pod="openstack/barbican-db-sync-65jnl" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.121733 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.121754 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlcqj\" (UniqueName: \"kubernetes.io/projected/f6024769-eb72-4852-9278-e86730c00512-kube-api-access-xlcqj\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.121770 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51a1013-b3ea-444a-b578-6cfc91b1c283-combined-ca-bundle\") pod \"neutron-db-sync-fz42m\" (UID: \"c51a1013-b3ea-444a-b578-6cfc91b1c283\") " 
pod="openstack/neutron-db-sync-fz42m" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.121790 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-config\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.129684 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.142074 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f9b6995df-77gt4"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.144250 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.151949 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f9b6995df-77gt4"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.160860 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.167088 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-prdrd" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.168990 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.176530 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.179594 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.194993 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224002 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51a1013-b3ea-444a-b578-6cfc91b1c283-combined-ca-bundle\") pod \"neutron-db-sync-fz42m\" (UID: \"c51a1013-b3ea-444a-b578-6cfc91b1c283\") " pod="openstack/neutron-db-sync-fz42m" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224058 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-config-data\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224089 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-config\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224143 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-combined-ca-bundle\") 
pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224162 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktnjn\" (UniqueName: \"kubernetes.io/projected/c51a1013-b3ea-444a-b578-6cfc91b1c283-kube-api-access-ktnjn\") pod \"neutron-db-sync-fz42m\" (UID: \"c51a1013-b3ea-444a-b578-6cfc91b1c283\") " pod="openstack/neutron-db-sync-fz42m" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224187 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224208 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224232 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5938c024-b918-43ab-a3e0-269af3da802b-logs\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224278 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g2s4\" (UniqueName: \"kubernetes.io/projected/faf9d2c1-13d7-4475-a978-9b02ccb6374d-kube-api-access-9g2s4\") pod 
\"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224308 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce305106-1701-4e2e-b87a-fc358e9c99d2-db-sync-config-data\") pod \"barbican-db-sync-65jnl\" (UID: \"ce305106-1701-4e2e-b87a-fc358e9c99d2\") " pod="openstack/barbican-db-sync-65jnl" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224334 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224358 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224377 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-scripts\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224399 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-scripts\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " 
pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224457 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77e33e26-6a57-4f48-9d16-3bb5502b1f76-run-httpd\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224479 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgkgl\" (UniqueName: \"kubernetes.io/projected/77e33e26-6a57-4f48-9d16-3bb5502b1f76-kube-api-access-bgkgl\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224522 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77e33e26-6a57-4f48-9d16-3bb5502b1f76-log-httpd\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224550 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-combined-ca-bundle\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224573 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224669 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qzfqf\" (UniqueName: \"kubernetes.io/projected/ce305106-1701-4e2e-b87a-fc358e9c99d2-kube-api-access-qzfqf\") pod \"barbican-db-sync-65jnl\" (UID: \"ce305106-1701-4e2e-b87a-fc358e9c99d2\") " pod="openstack/barbican-db-sync-65jnl" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224698 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224727 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c51a1013-b3ea-444a-b578-6cfc91b1c283-config\") pod \"neutron-db-sync-fz42m\" (UID: \"c51a1013-b3ea-444a-b578-6cfc91b1c283\") " pod="openstack/neutron-db-sync-fz42m" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224765 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224797 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-config-data\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224820 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-scripts\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224846 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwtbf\" (UniqueName: \"kubernetes.io/projected/5938c024-b918-43ab-a3e0-269af3da802b-kube-api-access-dwtbf\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224872 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf9d2c1-13d7-4475-a978-9b02ccb6374d-logs\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224917 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-config-data\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224948 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce305106-1701-4e2e-b87a-fc358e9c99d2-combined-ca-bundle\") pod \"barbican-db-sync-65jnl\" (UID: \"ce305106-1701-4e2e-b87a-fc358e9c99d2\") " pod="openstack/barbican-db-sync-65jnl" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.224973 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5938c024-b918-43ab-a3e0-269af3da802b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.225009 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.225055 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlcqj\" (UniqueName: \"kubernetes.io/projected/f6024769-eb72-4852-9278-e86730c00512-kube-api-access-xlcqj\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.227342 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-config\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.230371 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77e33e26-6a57-4f48-9d16-3bb5502b1f76-log-httpd\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.231313 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: 
\"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.233020 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.233878 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.235638 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce305106-1701-4e2e-b87a-fc358e9c99d2-combined-ca-bundle\") pod \"barbican-db-sync-65jnl\" (UID: \"ce305106-1701-4e2e-b87a-fc358e9c99d2\") " pod="openstack/barbican-db-sync-65jnl" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.235938 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77e33e26-6a57-4f48-9d16-3bb5502b1f76-run-httpd\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.236709 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.237601 5000 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-config-data\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.239525 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.240143 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.242351 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51a1013-b3ea-444a-b578-6cfc91b1c283-combined-ca-bundle\") pod \"neutron-db-sync-fz42m\" (UID: \"c51a1013-b3ea-444a-b578-6cfc91b1c283\") " pod="openstack/neutron-db-sync-fz42m" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.277189 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce305106-1701-4e2e-b87a-fc358e9c99d2-db-sync-config-data\") pod \"barbican-db-sync-65jnl\" (UID: \"ce305106-1701-4e2e-b87a-fc358e9c99d2\") " pod="openstack/barbican-db-sync-65jnl" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.277883 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgkgl\" (UniqueName: \"kubernetes.io/projected/77e33e26-6a57-4f48-9d16-3bb5502b1f76-kube-api-access-bgkgl\") 
pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.278517 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-scripts\") pod \"ceilometer-0\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.284818 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.288753 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktnjn\" (UniqueName: \"kubernetes.io/projected/c51a1013-b3ea-444a-b578-6cfc91b1c283-kube-api-access-ktnjn\") pod \"neutron-db-sync-fz42m\" (UID: \"c51a1013-b3ea-444a-b578-6cfc91b1c283\") " pod="openstack/neutron-db-sync-fz42m" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.289313 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzfqf\" (UniqueName: \"kubernetes.io/projected/ce305106-1701-4e2e-b87a-fc358e9c99d2-kube-api-access-qzfqf\") pod \"barbican-db-sync-65jnl\" (UID: \"ce305106-1701-4e2e-b87a-fc358e9c99d2\") " pod="openstack/barbican-db-sync-65jnl" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.293849 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c51a1013-b3ea-444a-b578-6cfc91b1c283-config\") pod \"neutron-db-sync-fz42m\" (UID: \"c51a1013-b3ea-444a-b578-6cfc91b1c283\") " pod="openstack/neutron-db-sync-fz42m" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.296627 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fz42m" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.296933 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlcqj\" (UniqueName: \"kubernetes.io/projected/f6024769-eb72-4852-9278-e86730c00512-kube-api-access-xlcqj\") pod \"dnsmasq-dns-785d8bcb8c-gwm8h\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335513 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335588 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335621 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-scripts\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335648 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwtbf\" (UniqueName: \"kubernetes.io/projected/5938c024-b918-43ab-a3e0-269af3da802b-kube-api-access-dwtbf\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " 
pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335671 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36acfd32-be57-4078-a5a6-b31cf5608620-logs\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335688 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf9d2c1-13d7-4475-a978-9b02ccb6374d-logs\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335711 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-config-data\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335738 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5938c024-b918-43ab-a3e0-269af3da802b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335804 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335831 
5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36acfd32-be57-4078-a5a6-b31cf5608620-config-data\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335851 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-config-data\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335877 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335916 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335940 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbdb6\" (UniqueName: \"kubernetes.io/projected/36acfd32-be57-4078-a5a6-b31cf5608620-kube-api-access-gbdb6\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335960 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.335981 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.336001 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5938c024-b918-43ab-a3e0-269af3da802b-logs\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.336021 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.336045 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g2s4\" (UniqueName: \"kubernetes.io/projected/faf9d2c1-13d7-4475-a978-9b02ccb6374d-kube-api-access-9g2s4\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.336078 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36acfd32-be57-4078-a5a6-b31cf5608620-scripts\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.336107 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-scripts\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.336129 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36acfd32-be57-4078-a5a6-b31cf5608620-horizon-secret-key\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.336155 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4rgt\" (UniqueName: \"kubernetes.io/projected/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-kube-api-access-h4rgt\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.336210 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.336249 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-combined-ca-bundle\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.336281 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.340082 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.340298 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.340918 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.342477 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-scripts\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.343797 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-65jnl" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.344577 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5938c024-b918-43ab-a3e0-269af3da802b-logs\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.344982 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5938c024-b918-43ab-a3e0-269af3da802b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.348401 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf9d2c1-13d7-4475-a978-9b02ccb6374d-logs\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.352110 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-config-data\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.355837 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-scripts\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.356517 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-combined-ca-bundle\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.359753 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.359807 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-config-data\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.378161 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwtbf\" (UniqueName: \"kubernetes.io/projected/5938c024-b918-43ab-a3e0-269af3da802b-kube-api-access-dwtbf\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.390183 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g2s4\" (UniqueName: \"kubernetes.io/projected/faf9d2c1-13d7-4475-a978-9b02ccb6374d-kube-api-access-9g2s4\") pod \"placement-db-sync-dgtdq\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.433651 5000 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.448298 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2xk2z"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.448631 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.448700 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36acfd32-be57-4078-a5a6-b31cf5608620-config-data\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.448762 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.448787 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc 
kubenswrapper[5000]: I0105 21:50:47.448828 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbdb6\" (UniqueName: \"kubernetes.io/projected/36acfd32-be57-4078-a5a6-b31cf5608620-kube-api-access-gbdb6\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.448868 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.448949 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36acfd32-be57-4078-a5a6-b31cf5608620-scripts\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.448985 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36acfd32-be57-4078-a5a6-b31cf5608620-horizon-secret-key\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.449011 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4rgt\" (UniqueName: \"kubernetes.io/projected/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-kube-api-access-h4rgt\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.449034 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.449066 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.449116 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.449127 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.449139 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36acfd32-be57-4078-a5a6-b31cf5608620-logs\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.450167 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.456226 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36acfd32-be57-4078-a5a6-b31cf5608620-config-data\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.459599 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.460883 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36acfd32-be57-4078-a5a6-b31cf5608620-scripts\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.461228 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36acfd32-be57-4078-a5a6-b31cf5608620-logs\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.462265 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36acfd32-be57-4078-a5a6-b31cf5608620-horizon-secret-key\") pod \"horizon-f9b6995df-77gt4\" (UID: 
\"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.463408 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.467316 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbdb6\" (UniqueName: \"kubernetes.io/projected/36acfd32-be57-4078-a5a6-b31cf5608620-kube-api-access-gbdb6\") pod \"horizon-f9b6995df-77gt4\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.468298 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.468580 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.471004 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 
21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.479249 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.479487 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4rgt\" (UniqueName: \"kubernetes.io/projected/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-kube-api-access-h4rgt\") pod \"glance-default-internal-api-0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.504174 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.600079 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-thcn6"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.672445 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dgtdq" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.735325 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.767049 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7796c48dd9-nmfpk"] Jan 05 21:50:47 crc kubenswrapper[5000]: I0105 21:50:47.767399 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.043927 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fz42m"] Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.054916 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7796c48dd9-nmfpk" event={"ID":"e949189a-2b75-4081-949f-07ec69d377b5","Type":"ContainerStarted","Data":"9eb1583a3ee4160ced1e9c7842967199d3ad73a8854ab8396f548500842cc644"} Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.061553 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2xk2z" event={"ID":"a8727a20-e9f3-4991-bbd3-aa7d98f42be2","Type":"ContainerStarted","Data":"72bb3eef728f9d74ac05abeec34605f33032c1be7658d93bec56ec7bf8068cb6"} Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.065695 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-thcn6" event={"ID":"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0","Type":"ContainerStarted","Data":"5b9fe02c3af154bbbe48512d35668972b93d63b11394963755957b1696b43a4d"} Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.065841 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" podUID="3c5ea572-39e7-4350-98d8-081a9c134f0e" containerName="dnsmasq-dns" containerID="cri-o://545974cd6b41f1206f9c9bf471d3c26f6eeb6baf9a062b6f84d5ea0d35ade5ea" gracePeriod=10 Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.174045 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-prdrd"] Jan 05 21:50:48 crc kubenswrapper[5000]: W0105 21:50:48.206921 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c9d2e23_33f0_4563_a4a6_4b2aaf6adf00.slice/crio-f6eb4e841a82ea82a0a4a3ba527188e26a6331808414a47525b06f58b3a192d2 
WatchSource:0}: Error finding container f6eb4e841a82ea82a0a4a3ba527188e26a6331808414a47525b06f58b3a192d2: Status 404 returned error can't find the container with id f6eb4e841a82ea82a0a4a3ba527188e26a6331808414a47525b06f58b3a192d2 Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.254247 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:50:48 crc kubenswrapper[5000]: E0105 21:50:48.272448 5000 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f99cca5_1c17_45fb_8cfd_3f9a4f8f05a0.slice/crio-e8b59ce485ce9aba2707f774d4a81d82ea85a56ca7f92645174f1621b6ad89df.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f99cca5_1c17_45fb_8cfd_3f9a4f8f05a0.slice/crio-conmon-e8b59ce485ce9aba2707f774d4a81d82ea85a56ca7f92645174f1621b6ad89df.scope\": RecentStats: unable to find data in memory cache]" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.280163 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gwm8h"] Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.591960 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.640361 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7796c48dd9-nmfpk"] Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.700570 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77cd8467c9-7zlz2"] Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.702651 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.733620 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f9b6995df-77gt4"] Jan 05 21:50:48 crc kubenswrapper[5000]: W0105 21:50:48.742082 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36acfd32_be57_4078_a5a6_b31cf5608620.slice/crio-01a54c25a964ed21dc9da2fd310620f4c15ff3834a121e638f5169f83f58e403 WatchSource:0}: Error finding container 01a54c25a964ed21dc9da2fd310620f4c15ff3834a121e638f5169f83f58e403: Status 404 returned error can't find the container with id 01a54c25a964ed21dc9da2fd310620f4c15ff3834a121e638f5169f83f58e403 Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.770323 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77cd8467c9-7zlz2"] Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.777506 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b603c84e-b4e1-45e2-af6b-de4905867cf6-scripts\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.777556 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plnjg\" (UniqueName: \"kubernetes.io/projected/b603c84e-b4e1-45e2-af6b-de4905867cf6-kube-api-access-plnjg\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.777583 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/b603c84e-b4e1-45e2-af6b-de4905867cf6-horizon-secret-key\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.777609 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b603c84e-b4e1-45e2-af6b-de4905867cf6-logs\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.777632 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b603c84e-b4e1-45e2-af6b-de4905867cf6-config-data\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.792813 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.805545 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dgtdq"] Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.813467 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-65jnl"] Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.884796 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plnjg\" (UniqueName: \"kubernetes.io/projected/b603c84e-b4e1-45e2-af6b-de4905867cf6-kube-api-access-plnjg\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.884939 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b603c84e-b4e1-45e2-af6b-de4905867cf6-horizon-secret-key\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.884990 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b603c84e-b4e1-45e2-af6b-de4905867cf6-logs\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.885036 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b603c84e-b4e1-45e2-af6b-de4905867cf6-config-data\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.885250 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b603c84e-b4e1-45e2-af6b-de4905867cf6-scripts\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.886314 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b603c84e-b4e1-45e2-af6b-de4905867cf6-scripts\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.887535 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b603c84e-b4e1-45e2-af6b-de4905867cf6-logs\") pod \"horizon-77cd8467c9-7zlz2\" (UID: 
\"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.899915 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b603c84e-b4e1-45e2-af6b-de4905867cf6-config-data\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.901867 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.928107 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.928445 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plnjg\" (UniqueName: \"kubernetes.io/projected/b603c84e-b4e1-45e2-af6b-de4905867cf6-kube-api-access-plnjg\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.928484 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b603c84e-b4e1-45e2-af6b-de4905867cf6-horizon-secret-key\") pod \"horizon-77cd8467c9-7zlz2\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:48 crc kubenswrapper[5000]: I0105 21:50:48.981330 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.000092 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.087518 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-config\") pod \"3c5ea572-39e7-4350-98d8-081a9c134f0e\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.087601 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-swift-storage-0\") pod \"3c5ea572-39e7-4350-98d8-081a9c134f0e\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.087639 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-ovsdbserver-sb\") pod \"3c5ea572-39e7-4350-98d8-081a9c134f0e\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.087727 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-ovsdbserver-nb\") pod \"3c5ea572-39e7-4350-98d8-081a9c134f0e\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.087847 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-svc\") pod \"3c5ea572-39e7-4350-98d8-081a9c134f0e\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.087869 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql99f\" 
(UniqueName: \"kubernetes.io/projected/3c5ea572-39e7-4350-98d8-081a9c134f0e-kube-api-access-ql99f\") pod \"3c5ea572-39e7-4350-98d8-081a9c134f0e\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.091252 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-65jnl" event={"ID":"ce305106-1701-4e2e-b87a-fc358e9c99d2","Type":"ContainerStarted","Data":"0b82287c0d53e511fbaa98a38b3dc419424d8a02f94b567f69ec07df40cdc73a"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.101294 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-prdrd" event={"ID":"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00","Type":"ContainerStarted","Data":"f6eb4e841a82ea82a0a4a3ba527188e26a6331808414a47525b06f58b3a192d2"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.105787 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c5ea572-39e7-4350-98d8-081a9c134f0e-kube-api-access-ql99f" (OuterVolumeSpecName: "kube-api-access-ql99f") pod "3c5ea572-39e7-4350-98d8-081a9c134f0e" (UID: "3c5ea572-39e7-4350-98d8-081a9c134f0e"). InnerVolumeSpecName "kube-api-access-ql99f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.106325 5000 generic.go:334] "Generic (PLEG): container finished" podID="0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0" containerID="e8b59ce485ce9aba2707f774d4a81d82ea85a56ca7f92645174f1621b6ad89df" exitCode=0 Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.106382 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-thcn6" event={"ID":"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0","Type":"ContainerDied","Data":"e8b59ce485ce9aba2707f774d4a81d82ea85a56ca7f92645174f1621b6ad89df"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.112617 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dgtdq" event={"ID":"faf9d2c1-13d7-4475-a978-9b02ccb6374d","Type":"ContainerStarted","Data":"b55eeb48473b3d1059c53e6d0b67386e441359e5d9077d13878ad9117b81269d"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.116961 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9b6995df-77gt4" event={"ID":"36acfd32-be57-4078-a5a6-b31cf5608620","Type":"ContainerStarted","Data":"01a54c25a964ed21dc9da2fd310620f4c15ff3834a121e638f5169f83f58e403"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.120262 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fz42m" event={"ID":"c51a1013-b3ea-444a-b578-6cfc91b1c283","Type":"ContainerStarted","Data":"1921ce77217983e532707ba4a5e2db0081860093ee5d7ceab50184dbdaeb2591"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.120290 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fz42m" event={"ID":"c51a1013-b3ea-444a-b578-6cfc91b1c283","Type":"ContainerStarted","Data":"5fe3be19a6b19705f41d2eee3c85ba93510d2be72a1fe561cb00b6e5ccd8098e"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.122443 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-2xk2z" event={"ID":"a8727a20-e9f3-4991-bbd3-aa7d98f42be2","Type":"ContainerStarted","Data":"e0b18519ef40111898fa1ffb641755a894426e7ecc98e8f0fd4a230ad39f2bc5"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.134859 5000 generic.go:334] "Generic (PLEG): container finished" podID="3c5ea572-39e7-4350-98d8-081a9c134f0e" containerID="545974cd6b41f1206f9c9bf471d3c26f6eeb6baf9a062b6f84d5ea0d35ade5ea" exitCode=0 Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.135407 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.135870 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" event={"ID":"3c5ea572-39e7-4350-98d8-081a9c134f0e","Type":"ContainerDied","Data":"545974cd6b41f1206f9c9bf471d3c26f6eeb6baf9a062b6f84d5ea0d35ade5ea"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.135919 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-x7k22" event={"ID":"3c5ea572-39e7-4350-98d8-081a9c134f0e","Type":"ContainerDied","Data":"a7fb7d8615836b56c5c80f81f4cea68564e7e0f228c7625497b91cf2a61d3a07"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.135934 5000 scope.go:117] "RemoveContainer" containerID="545974cd6b41f1206f9c9bf471d3c26f6eeb6baf9a062b6f84d5ea0d35ade5ea" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.148451 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77e33e26-6a57-4f48-9d16-3bb5502b1f76","Type":"ContainerStarted","Data":"4f3d2fcc8ab7accc13757566dcb6d31489397af2c8c6bde0a58d47e4724e911c"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.153688 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-config" (OuterVolumeSpecName: "config") pod 
"3c5ea572-39e7-4350-98d8-081a9c134f0e" (UID: "3c5ea572-39e7-4350-98d8-081a9c134f0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.155407 5000 generic.go:334] "Generic (PLEG): container finished" podID="f6024769-eb72-4852-9278-e86730c00512" containerID="030da89c0f42622ed60d330e21c5cf4f5b0acac8b4053fbc3422cbd0c8ff5071" exitCode=0 Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.155464 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" event={"ID":"f6024769-eb72-4852-9278-e86730c00512","Type":"ContainerDied","Data":"030da89c0f42622ed60d330e21c5cf4f5b0acac8b4053fbc3422cbd0c8ff5071"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.155489 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" event={"ID":"f6024769-eb72-4852-9278-e86730c00512","Type":"ContainerStarted","Data":"f8a31bb0eb95d427650a1f7bb99aa89eb1fe3f40766c1b6e8bec16fc85a525f4"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.161069 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c5ea572-39e7-4350-98d8-081a9c134f0e" (UID: "3c5ea572-39e7-4350-98d8-081a9c134f0e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.161111 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5938c024-b918-43ab-a3e0-269af3da802b","Type":"ContainerStarted","Data":"6894f814c13e101b937449380071d1692093c5d90982cbc0f048437b41ebd27d"} Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.165728 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c5ea572-39e7-4350-98d8-081a9c134f0e" (UID: "3c5ea572-39e7-4350-98d8-081a9c134f0e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.186604 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2xk2z" podStartSLOduration=3.18658149 podStartE2EDuration="3.18658149s" podCreationTimestamp="2026-01-05 21:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:49.183385479 +0000 UTC m=+1004.139587948" watchObservedRunningTime="2026-01-05 21:50:49.18658149 +0000 UTC m=+1004.142783959" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.188700 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c5ea572-39e7-4350-98d8-081a9c134f0e" (UID: "3c5ea572-39e7-4350-98d8-081a9c134f0e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.189095 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c5ea572-39e7-4350-98d8-081a9c134f0e" (UID: "3c5ea572-39e7-4350-98d8-081a9c134f0e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.189688 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-swift-storage-0\") pod \"3c5ea572-39e7-4350-98d8-081a9c134f0e\" (UID: \"3c5ea572-39e7-4350-98d8-081a9c134f0e\") " Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.190226 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.190239 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql99f\" (UniqueName: \"kubernetes.io/projected/3c5ea572-39e7-4350-98d8-081a9c134f0e-kube-api-access-ql99f\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.190248 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.190255 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.190297 
5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:49 crc kubenswrapper[5000]: W0105 21:50:49.190530 5000 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3c5ea572-39e7-4350-98d8-081a9c134f0e/volumes/kubernetes.io~configmap/dns-swift-storage-0 Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.190538 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c5ea572-39e7-4350-98d8-081a9c134f0e" (UID: "3c5ea572-39e7-4350-98d8-081a9c134f0e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.200719 5000 scope.go:117] "RemoveContainer" containerID="4bec26acaa16a245e2de4baac13add0f0c05986da8cba47eb05cead1d97b4e5f" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.205221 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-fz42m" podStartSLOduration=3.20520622 podStartE2EDuration="3.20520622s" podCreationTimestamp="2026-01-05 21:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:49.201353871 +0000 UTC m=+1004.157556340" watchObservedRunningTime="2026-01-05 21:50:49.20520622 +0000 UTC m=+1004.161408689" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.280625 5000 scope.go:117] "RemoveContainer" containerID="545974cd6b41f1206f9c9bf471d3c26f6eeb6baf9a062b6f84d5ea0d35ade5ea" Jan 05 21:50:49 crc kubenswrapper[5000]: E0105 21:50:49.281454 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"545974cd6b41f1206f9c9bf471d3c26f6eeb6baf9a062b6f84d5ea0d35ade5ea\": container with ID starting with 545974cd6b41f1206f9c9bf471d3c26f6eeb6baf9a062b6f84d5ea0d35ade5ea not found: ID does not exist" containerID="545974cd6b41f1206f9c9bf471d3c26f6eeb6baf9a062b6f84d5ea0d35ade5ea" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.281491 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"545974cd6b41f1206f9c9bf471d3c26f6eeb6baf9a062b6f84d5ea0d35ade5ea"} err="failed to get container status \"545974cd6b41f1206f9c9bf471d3c26f6eeb6baf9a062b6f84d5ea0d35ade5ea\": rpc error: code = NotFound desc = could not find container \"545974cd6b41f1206f9c9bf471d3c26f6eeb6baf9a062b6f84d5ea0d35ade5ea\": container with ID starting with 545974cd6b41f1206f9c9bf471d3c26f6eeb6baf9a062b6f84d5ea0d35ade5ea not found: ID does not exist" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.281517 5000 scope.go:117] "RemoveContainer" containerID="4bec26acaa16a245e2de4baac13add0f0c05986da8cba47eb05cead1d97b4e5f" Jan 05 21:50:49 crc kubenswrapper[5000]: E0105 21:50:49.287745 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bec26acaa16a245e2de4baac13add0f0c05986da8cba47eb05cead1d97b4e5f\": container with ID starting with 4bec26acaa16a245e2de4baac13add0f0c05986da8cba47eb05cead1d97b4e5f not found: ID does not exist" containerID="4bec26acaa16a245e2de4baac13add0f0c05986da8cba47eb05cead1d97b4e5f" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.287790 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bec26acaa16a245e2de4baac13add0f0c05986da8cba47eb05cead1d97b4e5f"} err="failed to get container status \"4bec26acaa16a245e2de4baac13add0f0c05986da8cba47eb05cead1d97b4e5f\": rpc error: code = NotFound desc = could not find container 
\"4bec26acaa16a245e2de4baac13add0f0c05986da8cba47eb05cead1d97b4e5f\": container with ID starting with 4bec26acaa16a245e2de4baac13add0f0c05986da8cba47eb05cead1d97b4e5f not found: ID does not exist" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.292464 5000 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c5ea572-39e7-4350-98d8-081a9c134f0e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.467785 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-x7k22"] Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.512026 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-x7k22"] Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.642421 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77cd8467c9-7zlz2"] Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.815245 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.901279 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-ovsdbserver-nb\") pod \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.901324 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-config\") pod \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.901344 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-dns-svc\") pod \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.901421 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-ovsdbserver-sb\") pod \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.901507 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-dns-swift-storage-0\") pod \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.901532 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ckc9\" 
(UniqueName: \"kubernetes.io/projected/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-kube-api-access-5ckc9\") pod \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\" (UID: \"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0\") " Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.909121 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-kube-api-access-5ckc9" (OuterVolumeSpecName: "kube-api-access-5ckc9") pod "0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0" (UID: "0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0"). InnerVolumeSpecName "kube-api-access-5ckc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.923628 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0" (UID: "0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.928517 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0" (UID: "0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.933959 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0" (UID: "0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.938925 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-config" (OuterVolumeSpecName: "config") pod "0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0" (UID: "0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.939853 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0" (UID: "0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:50:49 crc kubenswrapper[5000]: I0105 21:50:49.959838 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.003363 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.003400 5000 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.003413 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ckc9\" (UniqueName: \"kubernetes.io/projected/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-kube-api-access-5ckc9\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.003422 5000 reconciler_common.go:293] "Volume detached 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.003432 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.003442 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.171146 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0","Type":"ContainerStarted","Data":"f154fac3ca0feff5ae725113e2becb5521fbc597af4beaf31edd38fe299ed4b8"} Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.174637 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-thcn6" event={"ID":"0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0","Type":"ContainerDied","Data":"5b9fe02c3af154bbbe48512d35668972b93d63b11394963755957b1696b43a4d"} Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.174653 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-thcn6" Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.174685 5000 scope.go:117] "RemoveContainer" containerID="e8b59ce485ce9aba2707f774d4a81d82ea85a56ca7f92645174f1621b6ad89df" Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.180828 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" event={"ID":"f6024769-eb72-4852-9278-e86730c00512","Type":"ContainerStarted","Data":"c7231c3da6807b778877dbde450a8270eabdada9d4c18b0fa7e341a8cbba7637"} Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.181811 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.198425 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5938c024-b918-43ab-a3e0-269af3da802b","Type":"ContainerStarted","Data":"5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff"} Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.200449 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77cd8467c9-7zlz2" event={"ID":"b603c84e-b4e1-45e2-af6b-de4905867cf6","Type":"ContainerStarted","Data":"ac791af01c33ee2db16404db2320abc153e8e78a03e08a23ebd0663cad6ae45e"} Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.204486 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" podStartSLOduration=4.204466547 podStartE2EDuration="4.204466547s" podCreationTimestamp="2026-01-05 21:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:50.197756705 +0000 UTC m=+1005.153959174" watchObservedRunningTime="2026-01-05 21:50:50.204466547 +0000 UTC m=+1005.160669016" Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 
21:50:50.248924 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-thcn6"] Jan 05 21:50:50 crc kubenswrapper[5000]: I0105 21:50:50.254064 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-thcn6"] Jan 05 21:50:51 crc kubenswrapper[5000]: I0105 21:50:51.215088 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5938c024-b918-43ab-a3e0-269af3da802b","Type":"ContainerStarted","Data":"b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69"} Jan 05 21:50:51 crc kubenswrapper[5000]: I0105 21:50:51.215368 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5938c024-b918-43ab-a3e0-269af3da802b" containerName="glance-log" containerID="cri-o://5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff" gracePeriod=30 Jan 05 21:50:51 crc kubenswrapper[5000]: I0105 21:50:51.215932 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5938c024-b918-43ab-a3e0-269af3da802b" containerName="glance-httpd" containerID="cri-o://b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69" gracePeriod=30 Jan 05 21:50:51 crc kubenswrapper[5000]: I0105 21:50:51.218549 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0","Type":"ContainerStarted","Data":"f62c1c02e74a9b63f41d6f4cb04984ea6a06f67416a3d26afef14e38a7909aa7"} Jan 05 21:50:51 crc kubenswrapper[5000]: I0105 21:50:51.237086 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.237069882 podStartE2EDuration="5.237069882s" podCreationTimestamp="2026-01-05 21:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:51.231644117 +0000 UTC m=+1006.187846586" watchObservedRunningTime="2026-01-05 21:50:51.237069882 +0000 UTC m=+1006.193272351" Jan 05 21:50:51 crc kubenswrapper[5000]: I0105 21:50:51.334748 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0" path="/var/lib/kubelet/pods/0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0/volumes" Jan 05 21:50:51 crc kubenswrapper[5000]: I0105 21:50:51.336657 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c5ea572-39e7-4350-98d8-081a9c134f0e" path="/var/lib/kubelet/pods/3c5ea572-39e7-4350-98d8-081a9c134f0e/volumes" Jan 05 21:50:51 crc kubenswrapper[5000]: I0105 21:50:51.900127 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.058564 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5938c024-b918-43ab-a3e0-269af3da802b-logs\") pod \"5938c024-b918-43ab-a3e0-269af3da802b\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.058720 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-config-data\") pod \"5938c024-b918-43ab-a3e0-269af3da802b\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.058748 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-combined-ca-bundle\") pod \"5938c024-b918-43ab-a3e0-269af3da802b\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 
21:50:52.058798 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5938c024-b918-43ab-a3e0-269af3da802b-httpd-run\") pod \"5938c024-b918-43ab-a3e0-269af3da802b\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.058874 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"5938c024-b918-43ab-a3e0-269af3da802b\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.058917 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwtbf\" (UniqueName: \"kubernetes.io/projected/5938c024-b918-43ab-a3e0-269af3da802b-kube-api-access-dwtbf\") pod \"5938c024-b918-43ab-a3e0-269af3da802b\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.059028 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-public-tls-certs\") pod \"5938c024-b918-43ab-a3e0-269af3da802b\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.059080 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-scripts\") pod \"5938c024-b918-43ab-a3e0-269af3da802b\" (UID: \"5938c024-b918-43ab-a3e0-269af3da802b\") " Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.060187 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5938c024-b918-43ab-a3e0-269af3da802b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5938c024-b918-43ab-a3e0-269af3da802b" (UID: 
"5938c024-b918-43ab-a3e0-269af3da802b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.060869 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5938c024-b918-43ab-a3e0-269af3da802b-logs" (OuterVolumeSpecName: "logs") pod "5938c024-b918-43ab-a3e0-269af3da802b" (UID: "5938c024-b918-43ab-a3e0-269af3da802b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.066613 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "5938c024-b918-43ab-a3e0-269af3da802b" (UID: "5938c024-b918-43ab-a3e0-269af3da802b"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.073117 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-scripts" (OuterVolumeSpecName: "scripts") pod "5938c024-b918-43ab-a3e0-269af3da802b" (UID: "5938c024-b918-43ab-a3e0-269af3da802b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.073171 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5938c024-b918-43ab-a3e0-269af3da802b-kube-api-access-dwtbf" (OuterVolumeSpecName: "kube-api-access-dwtbf") pod "5938c024-b918-43ab-a3e0-269af3da802b" (UID: "5938c024-b918-43ab-a3e0-269af3da802b"). InnerVolumeSpecName "kube-api-access-dwtbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.094208 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5938c024-b918-43ab-a3e0-269af3da802b" (UID: "5938c024-b918-43ab-a3e0-269af3da802b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.138458 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5938c024-b918-43ab-a3e0-269af3da802b" (UID: "5938c024-b918-43ab-a3e0-269af3da802b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.140789 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-config-data" (OuterVolumeSpecName: "config-data") pod "5938c024-b918-43ab-a3e0-269af3da802b" (UID: "5938c024-b918-43ab-a3e0-269af3da802b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.161185 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.161216 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.161226 5000 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5938c024-b918-43ab-a3e0-269af3da802b-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.161253 5000 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.161264 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwtbf\" (UniqueName: \"kubernetes.io/projected/5938c024-b918-43ab-a3e0-269af3da802b-kube-api-access-dwtbf\") on node \"crc\" DevicePath \"\""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.161274 5000 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.161282 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5938c024-b918-43ab-a3e0-269af3da802b-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.161290 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5938c024-b918-43ab-a3e0-269af3da802b-logs\") on node \"crc\" DevicePath \"\""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.180337 5000 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.247515 5000 generic.go:334] "Generic (PLEG): container finished" podID="5938c024-b918-43ab-a3e0-269af3da802b" containerID="b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69" exitCode=143
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.247836 5000 generic.go:334] "Generic (PLEG): container finished" podID="5938c024-b918-43ab-a3e0-269af3da802b" containerID="5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff" exitCode=143
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.247883 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5938c024-b918-43ab-a3e0-269af3da802b","Type":"ContainerDied","Data":"b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69"}
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.247969 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.247964 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5938c024-b918-43ab-a3e0-269af3da802b","Type":"ContainerDied","Data":"5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff"}
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.248000 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5938c024-b918-43ab-a3e0-269af3da802b","Type":"ContainerDied","Data":"6894f814c13e101b937449380071d1692093c5d90982cbc0f048437b41ebd27d"}
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.248021 5000 scope.go:117] "RemoveContainer" containerID="b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.253336 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0","Type":"ContainerStarted","Data":"e9fe46d20e48e523e7b9e9a72ec8e19912f5821530662c5140d033e9017183ef"}
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.262125 5000 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.289589 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.300272 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.311272 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 05 21:50:52 crc kubenswrapper[5000]: E0105 21:50:52.311708 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5938c024-b918-43ab-a3e0-269af3da802b" containerName="glance-httpd"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.311725 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="5938c024-b918-43ab-a3e0-269af3da802b" containerName="glance-httpd"
Jan 05 21:50:52 crc kubenswrapper[5000]: E0105 21:50:52.311736 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5938c024-b918-43ab-a3e0-269af3da802b" containerName="glance-log"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.311745 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="5938c024-b918-43ab-a3e0-269af3da802b" containerName="glance-log"
Jan 05 21:50:52 crc kubenswrapper[5000]: E0105 21:50:52.311757 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0" containerName="init"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.311766 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0" containerName="init"
Jan 05 21:50:52 crc kubenswrapper[5000]: E0105 21:50:52.311793 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ea572-39e7-4350-98d8-081a9c134f0e" containerName="init"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.311801 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ea572-39e7-4350-98d8-081a9c134f0e" containerName="init"
Jan 05 21:50:52 crc kubenswrapper[5000]: E0105 21:50:52.311815 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5ea572-39e7-4350-98d8-081a9c134f0e" containerName="dnsmasq-dns"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.311822 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5ea572-39e7-4350-98d8-081a9c134f0e" containerName="dnsmasq-dns"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.312052 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5ea572-39e7-4350-98d8-081a9c134f0e" containerName="dnsmasq-dns"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.312086 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="5938c024-b918-43ab-a3e0-269af3da802b" containerName="glance-log"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.312325 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f99cca5-1c17-45fb-8cfd-3f9a4f8f05a0" containerName="init"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.312356 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="5938c024-b918-43ab-a3e0-269af3da802b" containerName="glance-httpd"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.313441 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.318701 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.319224 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.332020 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.465484 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7ltt\" (UniqueName: \"kubernetes.io/projected/1d726b11-25d7-4065-9097-5d61acac1fc6-kube-api-access-m7ltt\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.465567 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d726b11-25d7-4065-9097-5d61acac1fc6-logs\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.465596 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-config-data\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.465621 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.465679 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.465718 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.465742 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d726b11-25d7-4065-9097-5d61acac1fc6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.465763 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-scripts\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.567489 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7ltt\" (UniqueName: \"kubernetes.io/projected/1d726b11-25d7-4065-9097-5d61acac1fc6-kube-api-access-m7ltt\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.567589 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d726b11-25d7-4065-9097-5d61acac1fc6-logs\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.567618 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-config-data\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.567644 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.567680 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.567717 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.567737 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d726b11-25d7-4065-9097-5d61acac1fc6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.567758 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-scripts\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.568155 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d726b11-25d7-4065-9097-5d61acac1fc6-logs\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.570643 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.570847 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d726b11-25d7-4065-9097-5d61acac1fc6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.572613 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.573203 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-scripts\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.574547 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.582884 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-config-data\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.593790 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7ltt\" (UniqueName: \"kubernetes.io/projected/1d726b11-25d7-4065-9097-5d61acac1fc6-kube-api-access-m7ltt\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.618071 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " pod="openstack/glance-default-external-api-0"
Jan 05 21:50:52 crc kubenswrapper[5000]: I0105 21:50:52.668193 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 05 21:50:53 crc kubenswrapper[5000]: I0105 21:50:53.267677 5000 generic.go:334] "Generic (PLEG): container finished" podID="a8727a20-e9f3-4991-bbd3-aa7d98f42be2" containerID="e0b18519ef40111898fa1ffb641755a894426e7ecc98e8f0fd4a230ad39f2bc5" exitCode=0
Jan 05 21:50:53 crc kubenswrapper[5000]: I0105 21:50:53.267770 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2xk2z" event={"ID":"a8727a20-e9f3-4991-bbd3-aa7d98f42be2","Type":"ContainerDied","Data":"e0b18519ef40111898fa1ffb641755a894426e7ecc98e8f0fd4a230ad39f2bc5"}
Jan 05 21:50:53 crc kubenswrapper[5000]: I0105 21:50:53.267842 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" containerName="glance-log" containerID="cri-o://f62c1c02e74a9b63f41d6f4cb04984ea6a06f67416a3d26afef14e38a7909aa7" gracePeriod=30
Jan 05 21:50:53 crc kubenswrapper[5000]: I0105 21:50:53.267961 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" containerName="glance-httpd" containerID="cri-o://e9fe46d20e48e523e7b9e9a72ec8e19912f5821530662c5140d033e9017183ef" gracePeriod=30
Jan 05 21:50:53 crc kubenswrapper[5000]: I0105 21:50:53.300301 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.299158946 podStartE2EDuration="7.299158946s" podCreationTimestamp="2026-01-05 21:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:50:53.29263481 +0000 UTC m=+1008.248837289" watchObservedRunningTime="2026-01-05 21:50:53.299158946 +0000 UTC m=+1008.255361415"
Jan 05 21:50:53 crc kubenswrapper[5000]: I0105 21:50:53.337590 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5938c024-b918-43ab-a3e0-269af3da802b" path="/var/lib/kubelet/pods/5938c024-b918-43ab-a3e0-269af3da802b/volumes"
Jan 05 21:50:54 crc kubenswrapper[5000]: I0105 21:50:54.283626 5000 generic.go:334] "Generic (PLEG): container finished" podID="c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" containerID="e9fe46d20e48e523e7b9e9a72ec8e19912f5821530662c5140d033e9017183ef" exitCode=0
Jan 05 21:50:54 crc kubenswrapper[5000]: I0105 21:50:54.283992 5000 generic.go:334] "Generic (PLEG): container finished" podID="c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" containerID="f62c1c02e74a9b63f41d6f4cb04984ea6a06f67416a3d26afef14e38a7909aa7" exitCode=143
Jan 05 21:50:54 crc kubenswrapper[5000]: I0105 21:50:54.284041 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0","Type":"ContainerDied","Data":"e9fe46d20e48e523e7b9e9a72ec8e19912f5821530662c5140d033e9017183ef"}
Jan 05 21:50:54 crc kubenswrapper[5000]: I0105 21:50:54.284084 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0","Type":"ContainerDied","Data":"f62c1c02e74a9b63f41d6f4cb04984ea6a06f67416a3d26afef14e38a7909aa7"}
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.346083 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f9b6995df-77gt4"]
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.388224 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65d5455f76-k75ww"]
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.390048 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.394242 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.427594 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.444934 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65d5455f76-k75ww"]
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.532237 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e000bdc7-d544-4dfe-ab2e-6c43a7453748-scripts\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.532276 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e000bdc7-d544-4dfe-ab2e-6c43a7453748-logs\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.532309 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8xgh\" (UniqueName: \"kubernetes.io/projected/e000bdc7-d544-4dfe-ab2e-6c43a7453748-kube-api-access-j8xgh\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.532402 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e000bdc7-d544-4dfe-ab2e-6c43a7453748-config-data\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.532431 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-combined-ca-bundle\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.532455 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-horizon-secret-key\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.532492 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-horizon-tls-certs\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.537615 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77cd8467c9-7zlz2"]
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.562003 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f48b4784d-5jgvr"]
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.563934 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.587878 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f48b4784d-5jgvr"]
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.634175 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e000bdc7-d544-4dfe-ab2e-6c43a7453748-scripts\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.634218 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e000bdc7-d544-4dfe-ab2e-6c43a7453748-logs\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.634249 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8xgh\" (UniqueName: \"kubernetes.io/projected/e000bdc7-d544-4dfe-ab2e-6c43a7453748-kube-api-access-j8xgh\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.634316 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e000bdc7-d544-4dfe-ab2e-6c43a7453748-config-data\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.634337 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-combined-ca-bundle\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.634360 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-horizon-secret-key\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.634395 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-horizon-tls-certs\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.635838 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e000bdc7-d544-4dfe-ab2e-6c43a7453748-scripts\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.636109 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e000bdc7-d544-4dfe-ab2e-6c43a7453748-logs\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.637834 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e000bdc7-d544-4dfe-ab2e-6c43a7453748-config-data\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.642568 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-horizon-secret-key\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.642791 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-horizon-tls-certs\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.654358 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8xgh\" (UniqueName: \"kubernetes.io/projected/e000bdc7-d544-4dfe-ab2e-6c43a7453748-kube-api-access-j8xgh\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.659793 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-combined-ca-bundle\") pod \"horizon-65d5455f76-k75ww\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.723982 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65d5455f76-k75ww"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.736865 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed51a505-1c96-4f98-879e-75283649a949-horizon-secret-key\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.736961 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed51a505-1c96-4f98-879e-75283649a949-combined-ca-bundle\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.736991 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed51a505-1c96-4f98-879e-75283649a949-logs\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.737016 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj88k\" (UniqueName: \"kubernetes.io/projected/ed51a505-1c96-4f98-879e-75283649a949-kube-api-access-gj88k\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.737077 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed51a505-1c96-4f98-879e-75283649a949-horizon-tls-certs\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.737148 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed51a505-1c96-4f98-879e-75283649a949-config-data\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.737196 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed51a505-1c96-4f98-879e-75283649a949-scripts\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.839122 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed51a505-1c96-4f98-879e-75283649a949-horizon-secret-key\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.839162 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed51a505-1c96-4f98-879e-75283649a949-combined-ca-bundle\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.839186 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed51a505-1c96-4f98-879e-75283649a949-logs\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.839205 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj88k\" (UniqueName: \"kubernetes.io/projected/ed51a505-1c96-4f98-879e-75283649a949-kube-api-access-gj88k\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.839247 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed51a505-1c96-4f98-879e-75283649a949-horizon-tls-certs\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.839297 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed51a505-1c96-4f98-879e-75283649a949-config-data\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.839327 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed51a505-1c96-4f98-879e-75283649a949-scripts\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.839921 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed51a505-1c96-4f98-879e-75283649a949-scripts\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr"
Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.840307 5000 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed51a505-1c96-4f98-879e-75283649a949-logs\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr" Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.843972 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed51a505-1c96-4f98-879e-75283649a949-combined-ca-bundle\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr" Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.844067 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed51a505-1c96-4f98-879e-75283649a949-horizon-secret-key\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr" Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.844336 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed51a505-1c96-4f98-879e-75283649a949-horizon-tls-certs\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr" Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.844750 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed51a505-1c96-4f98-879e-75283649a949-config-data\") pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr" Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.860833 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj88k\" (UniqueName: \"kubernetes.io/projected/ed51a505-1c96-4f98-879e-75283649a949-kube-api-access-gj88k\") 
pod \"horizon-6f48b4784d-5jgvr\" (UID: \"ed51a505-1c96-4f98-879e-75283649a949\") " pod="openstack/horizon-6f48b4784d-5jgvr" Jan 05 21:50:55 crc kubenswrapper[5000]: I0105 21:50:55.884797 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f48b4784d-5jgvr" Jan 05 21:50:57 crc kubenswrapper[5000]: I0105 21:50:57.362159 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:50:57 crc kubenswrapper[5000]: I0105 21:50:57.416730 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-k8prf"] Jan 05 21:50:57 crc kubenswrapper[5000]: I0105 21:50:57.417406 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-k8prf" podUID="bf40f774-440a-4644-9324-66f2c7d2647e" containerName="dnsmasq-dns" containerID="cri-o://61f168b9c80051e15502e7c92859ec86a555a8087198db14f4888a76dc6d70dd" gracePeriod=10 Jan 05 21:50:58 crc kubenswrapper[5000]: I0105 21:50:58.332433 5000 generic.go:334] "Generic (PLEG): container finished" podID="bf40f774-440a-4644-9324-66f2c7d2647e" containerID="61f168b9c80051e15502e7c92859ec86a555a8087198db14f4888a76dc6d70dd" exitCode=0 Jan 05 21:50:58 crc kubenswrapper[5000]: I0105 21:50:58.332538 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-k8prf" event={"ID":"bf40f774-440a-4644-9324-66f2c7d2647e","Type":"ContainerDied","Data":"61f168b9c80051e15502e7c92859ec86a555a8087198db14f4888a76dc6d70dd"} Jan 05 21:50:59 crc kubenswrapper[5000]: I0105 21:50:59.251072 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-k8prf" podUID="bf40f774-440a-4644-9324-66f2c7d2647e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 05 21:51:03 crc kubenswrapper[5000]: E0105 21:51:03.189218 5000 log.go:32] "PullImage from 
image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 05 21:51:03 crc kubenswrapper[5000]: E0105 21:51:03.189762 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55bh5c5h675h648h6ch5f7h68fh5bfh8dhddh656h68fh8hfbh7fhcbhc4h66dh4hcch557h5d6h9ch589h54ch57ch94h645h594h584hddh6cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xzpxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Std
inOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7796c48dd9-nmfpk_openstack(e949189a-2b75-4081-949f-07ec69d377b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 21:51:03 crc kubenswrapper[5000]: E0105 21:51:03.191747 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7796c48dd9-nmfpk" podUID="e949189a-2b75-4081-949f-07ec69d377b5" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.260265 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.379532 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2xk2z" event={"ID":"a8727a20-e9f3-4991-bbd3-aa7d98f42be2","Type":"ContainerDied","Data":"72bb3eef728f9d74ac05abeec34605f33032c1be7658d93bec56ec7bf8068cb6"} Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.379574 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2xk2z" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.379586 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72bb3eef728f9d74ac05abeec34605f33032c1be7658d93bec56ec7bf8068cb6" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.393315 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-scripts\") pod \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.393409 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-config-data\") pod \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.393443 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kgps\" (UniqueName: \"kubernetes.io/projected/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-kube-api-access-4kgps\") pod \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.393534 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-combined-ca-bundle\") pod \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.393595 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-credential-keys\") pod \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\" 
(UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.393696 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-fernet-keys\") pod \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\" (UID: \"a8727a20-e9f3-4991-bbd3-aa7d98f42be2\") " Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.401555 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a8727a20-e9f3-4991-bbd3-aa7d98f42be2" (UID: "a8727a20-e9f3-4991-bbd3-aa7d98f42be2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.402159 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-scripts" (OuterVolumeSpecName: "scripts") pod "a8727a20-e9f3-4991-bbd3-aa7d98f42be2" (UID: "a8727a20-e9f3-4991-bbd3-aa7d98f42be2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.416366 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a8727a20-e9f3-4991-bbd3-aa7d98f42be2" (UID: "a8727a20-e9f3-4991-bbd3-aa7d98f42be2"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.432965 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-kube-api-access-4kgps" (OuterVolumeSpecName: "kube-api-access-4kgps") pod "a8727a20-e9f3-4991-bbd3-aa7d98f42be2" (UID: "a8727a20-e9f3-4991-bbd3-aa7d98f42be2"). InnerVolumeSpecName "kube-api-access-4kgps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.458386 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-config-data" (OuterVolumeSpecName: "config-data") pod "a8727a20-e9f3-4991-bbd3-aa7d98f42be2" (UID: "a8727a20-e9f3-4991-bbd3-aa7d98f42be2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.462382 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8727a20-e9f3-4991-bbd3-aa7d98f42be2" (UID: "a8727a20-e9f3-4991-bbd3-aa7d98f42be2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.496243 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.496270 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.496280 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kgps\" (UniqueName: \"kubernetes.io/projected/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-kube-api-access-4kgps\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.496292 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.496301 5000 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:03 crc kubenswrapper[5000]: I0105 21:51:03.496309 5000 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a8727a20-e9f3-4991-bbd3-aa7d98f42be2-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.250704 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-k8prf" podUID="bf40f774-440a-4644-9324-66f2c7d2647e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 05 21:51:04 crc 
kubenswrapper[5000]: I0105 21:51:04.333846 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2xk2z"] Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.342536 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2xk2z"] Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.463972 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nhpcs"] Jan 05 21:51:04 crc kubenswrapper[5000]: E0105 21:51:04.464461 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8727a20-e9f3-4991-bbd3-aa7d98f42be2" containerName="keystone-bootstrap" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.464479 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8727a20-e9f3-4991-bbd3-aa7d98f42be2" containerName="keystone-bootstrap" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.464704 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8727a20-e9f3-4991-bbd3-aa7d98f42be2" containerName="keystone-bootstrap" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.465461 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.469789 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.470264 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.474277 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zcmgb" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.474505 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.474650 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.477941 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nhpcs"] Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.625570 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-scripts\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.625622 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-config-data\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.625785 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-fernet-keys\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.625851 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-combined-ca-bundle\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.626019 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb8jh\" (UniqueName: \"kubernetes.io/projected/024cd8c9-c0c9-4f2c-884b-e818c2a95133-kube-api-access-gb8jh\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.626054 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-credential-keys\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.728428 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-credential-keys\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.728606 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-scripts\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.728644 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-config-data\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.728690 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-fernet-keys\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.728716 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-combined-ca-bundle\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.728851 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb8jh\" (UniqueName: \"kubernetes.io/projected/024cd8c9-c0c9-4f2c-884b-e818c2a95133-kube-api-access-gb8jh\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.733412 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-fernet-keys\") pod \"keystone-bootstrap-nhpcs\" (UID: 
\"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.733781 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-scripts\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.733898 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-config-data\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.734305 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-credential-keys\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.735007 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-combined-ca-bundle\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.749299 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb8jh\" (UniqueName: \"kubernetes.io/projected/024cd8c9-c0c9-4f2c-884b-e818c2a95133-kube-api-access-gb8jh\") pod \"keystone-bootstrap-nhpcs\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:04 crc kubenswrapper[5000]: E0105 
21:51:04.786723 5000 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 05 21:51:04 crc kubenswrapper[5000]: E0105 21:51:04.787126 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9g2s4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabil
ities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-dgtdq_openstack(faf9d2c1-13d7-4475-a978-9b02ccb6374d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 21:51:04 crc kubenswrapper[5000]: E0105 21:51:04.788494 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-dgtdq" podUID="faf9d2c1-13d7-4475-a978-9b02ccb6374d" Jan 05 21:51:04 crc kubenswrapper[5000]: I0105 21:51:04.791866 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:05 crc kubenswrapper[5000]: E0105 21:51:05.211736 5000 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 05 21:51:05 crc kubenswrapper[5000]: E0105 21:51:05.212259 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n594h8fh98h5b8h649h657hddh87h88h557hf9h684h7dh65fh595h68dhd7hf5h5c7h56ch86h67hd9h59fh655h59fhd7h566h54hcfh66ch595q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgkgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/py
thon3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(77e33e26-6a57-4f48-9d16-3bb5502b1f76): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 21:51:05 crc kubenswrapper[5000]: E0105 21:51:05.231823 5000 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 05 21:51:05 crc kubenswrapper[5000]: E0105 21:51:05.232031 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6dh94h588h68h5c5h64h577h5d8h5d7h5cch668h5bfh696h55fh57h5d4h58dh5d4h8ch558h57dhdh65dh55dh66dh68chfdh5h5dch649h575h577q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plnjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-77cd8467c9-7zlz2_openstack(b603c84e-b4e1-45e2-af6b-de4905867cf6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 21:51:05 crc kubenswrapper[5000]: E0105 
21:51:05.240122 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-77cd8467c9-7zlz2" podUID="b603c84e-b4e1-45e2-af6b-de4905867cf6" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.294256 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.343830 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8727a20-e9f3-4991-bbd3-aa7d98f42be2" path="/var/lib/kubelet/pods/a8727a20-e9f3-4991-bbd3-aa7d98f42be2/volumes" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.399428 5000 generic.go:334] "Generic (PLEG): container finished" podID="c51a1013-b3ea-444a-b578-6cfc91b1c283" containerID="1921ce77217983e532707ba4a5e2db0081860093ee5d7ceab50184dbdaeb2591" exitCode=0 Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.399464 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fz42m" event={"ID":"c51a1013-b3ea-444a-b578-6cfc91b1c283","Type":"ContainerDied","Data":"1921ce77217983e532707ba4a5e2db0081860093ee5d7ceab50184dbdaeb2591"} Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.402252 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0","Type":"ContainerDied","Data":"f154fac3ca0feff5ae725113e2becb5521fbc597af4beaf31edd38fe299ed4b8"} Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.402324 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 21:51:05 crc kubenswrapper[5000]: E0105 21:51:05.403558 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-dgtdq" podUID="faf9d2c1-13d7-4475-a978-9b02ccb6374d" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.441152 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4rgt\" (UniqueName: \"kubernetes.io/projected/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-kube-api-access-h4rgt\") pod \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.441204 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-httpd-run\") pod \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.441275 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.441296 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-scripts\") pod \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.441360 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-logs\") pod \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.441416 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-combined-ca-bundle\") pod \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.441468 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-internal-tls-certs\") pod \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.441518 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-config-data\") pod \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\" (UID: \"c9b0a5bd-f771-4b70-856b-10f8e4a72dc0\") " Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.442173 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" (UID: "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.442332 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-logs" (OuterVolumeSpecName: "logs") pod "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" (UID: "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.446982 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-scripts" (OuterVolumeSpecName: "scripts") pod "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" (UID: "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.447106 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-kube-api-access-h4rgt" (OuterVolumeSpecName: "kube-api-access-h4rgt") pod "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" (UID: "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0"). InnerVolumeSpecName "kube-api-access-h4rgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.448555 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" (UID: "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.469008 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" (UID: "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.487919 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-config-data" (OuterVolumeSpecName: "config-data") pod "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" (UID: "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.494799 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" (UID: "c9b0a5bd-f771-4b70-856b-10f8e4a72dc0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.543494 5000 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.543537 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.543553 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.543567 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 
21:51:05.543582 5000 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.543594 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.543606 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4rgt\" (UniqueName: \"kubernetes.io/projected/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-kube-api-access-h4rgt\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.543619 5000 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.574992 5000 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.645218 5000 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.753007 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.776326 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.794078 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 
21:51:05 crc kubenswrapper[5000]: E0105 21:51:05.794550 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" containerName="glance-httpd" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.794575 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" containerName="glance-httpd" Jan 05 21:51:05 crc kubenswrapper[5000]: E0105 21:51:05.794627 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" containerName="glance-log" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.794637 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" containerName="glance-log" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.794858 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" containerName="glance-log" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.794897 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" containerName="glance-httpd" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.796064 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.798389 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.799707 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.805830 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.954856 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.954935 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.954978 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e11de54-ff33-4464-ab87-a565a688e5b5-logs\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.955006 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2e11de54-ff33-4464-ab87-a565a688e5b5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.955031 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.955358 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.955404 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:05 crc kubenswrapper[5000]: I0105 21:51:05.955434 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzg4h\" (UniqueName: \"kubernetes.io/projected/2e11de54-ff33-4464-ab87-a565a688e5b5-kube-api-access-pzg4h\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.057234 5000 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.057281 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.057310 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e11de54-ff33-4464-ab87-a565a688e5b5-logs\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.057330 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e11de54-ff33-4464-ab87-a565a688e5b5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.057345 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.057410 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.057429 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.057447 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzg4h\" (UniqueName: \"kubernetes.io/projected/2e11de54-ff33-4464-ab87-a565a688e5b5-kube-api-access-pzg4h\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.057912 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.058139 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e11de54-ff33-4464-ab87-a565a688e5b5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.058406 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2e11de54-ff33-4464-ab87-a565a688e5b5-logs\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.061754 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.062298 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.062742 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.063000 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.079795 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzg4h\" (UniqueName: \"kubernetes.io/projected/2e11de54-ff33-4464-ab87-a565a688e5b5-kube-api-access-pzg4h\") pod 
\"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.090005 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:51:06 crc kubenswrapper[5000]: I0105 21:51:06.133097 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 21:51:07 crc kubenswrapper[5000]: I0105 21:51:07.336526 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b0a5bd-f771-4b70-856b-10f8e4a72dc0" path="/var/lib/kubelet/pods/c9b0a5bd-f771-4b70-856b-10f8e4a72dc0/volumes" Jan 05 21:51:09 crc kubenswrapper[5000]: I0105 21:51:09.251315 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-k8prf" podUID="bf40f774-440a-4644-9324-66f2c7d2647e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 05 21:51:09 crc kubenswrapper[5000]: I0105 21:51:09.251704 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:51:12 crc kubenswrapper[5000]: E0105 21:51:12.717082 5000 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 05 21:51:12 crc kubenswrapper[5000]: E0105 21:51:12.717719 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzfqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-65jnl_openstack(ce305106-1701-4e2e-b87a-fc358e9c99d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 21:51:12 crc kubenswrapper[5000]: E0105 21:51:12.719566 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-65jnl" 
podUID="ce305106-1701-4e2e-b87a-fc358e9c99d2" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.742757 5000 scope.go:117] "RemoveContainer" containerID="5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.841153 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.848055 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.853862 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fz42m" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.984081 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e949189a-2b75-4081-949f-07ec69d377b5-logs\") pod \"e949189a-2b75-4081-949f-07ec69d377b5\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.984415 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e949189a-2b75-4081-949f-07ec69d377b5-logs" (OuterVolumeSpecName: "logs") pod "e949189a-2b75-4081-949f-07ec69d377b5" (UID: "e949189a-2b75-4081-949f-07ec69d377b5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.984438 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plnjg\" (UniqueName: \"kubernetes.io/projected/b603c84e-b4e1-45e2-af6b-de4905867cf6-kube-api-access-plnjg\") pod \"b603c84e-b4e1-45e2-af6b-de4905867cf6\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.984483 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b603c84e-b4e1-45e2-af6b-de4905867cf6-horizon-secret-key\") pod \"b603c84e-b4e1-45e2-af6b-de4905867cf6\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.984541 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b603c84e-b4e1-45e2-af6b-de4905867cf6-config-data\") pod \"b603c84e-b4e1-45e2-af6b-de4905867cf6\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.984569 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktnjn\" (UniqueName: \"kubernetes.io/projected/c51a1013-b3ea-444a-b578-6cfc91b1c283-kube-api-access-ktnjn\") pod \"c51a1013-b3ea-444a-b578-6cfc91b1c283\" (UID: \"c51a1013-b3ea-444a-b578-6cfc91b1c283\") " Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.984592 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e949189a-2b75-4081-949f-07ec69d377b5-scripts\") pod \"e949189a-2b75-4081-949f-07ec69d377b5\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.984619 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-xzpxx\" (UniqueName: \"kubernetes.io/projected/e949189a-2b75-4081-949f-07ec69d377b5-kube-api-access-xzpxx\") pod \"e949189a-2b75-4081-949f-07ec69d377b5\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.984645 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51a1013-b3ea-444a-b578-6cfc91b1c283-combined-ca-bundle\") pod \"c51a1013-b3ea-444a-b578-6cfc91b1c283\" (UID: \"c51a1013-b3ea-444a-b578-6cfc91b1c283\") " Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.984664 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e949189a-2b75-4081-949f-07ec69d377b5-config-data\") pod \"e949189a-2b75-4081-949f-07ec69d377b5\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.984709 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b603c84e-b4e1-45e2-af6b-de4905867cf6-scripts\") pod \"b603c84e-b4e1-45e2-af6b-de4905867cf6\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.984738 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c51a1013-b3ea-444a-b578-6cfc91b1c283-config\") pod \"c51a1013-b3ea-444a-b578-6cfc91b1c283\" (UID: \"c51a1013-b3ea-444a-b578-6cfc91b1c283\") " Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.985294 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e949189a-2b75-4081-949f-07ec69d377b5-config-data" (OuterVolumeSpecName: "config-data") pod "e949189a-2b75-4081-949f-07ec69d377b5" (UID: "e949189a-2b75-4081-949f-07ec69d377b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.985364 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e949189a-2b75-4081-949f-07ec69d377b5-horizon-secret-key\") pod \"e949189a-2b75-4081-949f-07ec69d377b5\" (UID: \"e949189a-2b75-4081-949f-07ec69d377b5\") " Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.986010 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b603c84e-b4e1-45e2-af6b-de4905867cf6-logs\") pod \"b603c84e-b4e1-45e2-af6b-de4905867cf6\" (UID: \"b603c84e-b4e1-45e2-af6b-de4905867cf6\") " Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.985665 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e949189a-2b75-4081-949f-07ec69d377b5-scripts" (OuterVolumeSpecName: "scripts") pod "e949189a-2b75-4081-949f-07ec69d377b5" (UID: "e949189a-2b75-4081-949f-07ec69d377b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.986192 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b603c84e-b4e1-45e2-af6b-de4905867cf6-scripts" (OuterVolumeSpecName: "scripts") pod "b603c84e-b4e1-45e2-af6b-de4905867cf6" (UID: "b603c84e-b4e1-45e2-af6b-de4905867cf6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.986397 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e949189a-2b75-4081-949f-07ec69d377b5-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.986412 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e949189a-2b75-4081-949f-07ec69d377b5-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.986420 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e949189a-2b75-4081-949f-07ec69d377b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.986428 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b603c84e-b4e1-45e2-af6b-de4905867cf6-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.986462 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b603c84e-b4e1-45e2-af6b-de4905867cf6-logs" (OuterVolumeSpecName: "logs") pod "b603c84e-b4e1-45e2-af6b-de4905867cf6" (UID: "b603c84e-b4e1-45e2-af6b-de4905867cf6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.988024 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b603c84e-b4e1-45e2-af6b-de4905867cf6-config-data" (OuterVolumeSpecName: "config-data") pod "b603c84e-b4e1-45e2-af6b-de4905867cf6" (UID: "b603c84e-b4e1-45e2-af6b-de4905867cf6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.991490 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b603c84e-b4e1-45e2-af6b-de4905867cf6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b603c84e-b4e1-45e2-af6b-de4905867cf6" (UID: "b603c84e-b4e1-45e2-af6b-de4905867cf6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.991606 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e949189a-2b75-4081-949f-07ec69d377b5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e949189a-2b75-4081-949f-07ec69d377b5" (UID: "e949189a-2b75-4081-949f-07ec69d377b5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.992101 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e949189a-2b75-4081-949f-07ec69d377b5-kube-api-access-xzpxx" (OuterVolumeSpecName: "kube-api-access-xzpxx") pod "e949189a-2b75-4081-949f-07ec69d377b5" (UID: "e949189a-2b75-4081-949f-07ec69d377b5"). InnerVolumeSpecName "kube-api-access-xzpxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:12 crc kubenswrapper[5000]: I0105 21:51:12.992239 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51a1013-b3ea-444a-b578-6cfc91b1c283-kube-api-access-ktnjn" (OuterVolumeSpecName: "kube-api-access-ktnjn") pod "c51a1013-b3ea-444a-b578-6cfc91b1c283" (UID: "c51a1013-b3ea-444a-b578-6cfc91b1c283"). InnerVolumeSpecName "kube-api-access-ktnjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.005148 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b603c84e-b4e1-45e2-af6b-de4905867cf6-kube-api-access-plnjg" (OuterVolumeSpecName: "kube-api-access-plnjg") pod "b603c84e-b4e1-45e2-af6b-de4905867cf6" (UID: "b603c84e-b4e1-45e2-af6b-de4905867cf6"). InnerVolumeSpecName "kube-api-access-plnjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.020118 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51a1013-b3ea-444a-b578-6cfc91b1c283-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c51a1013-b3ea-444a-b578-6cfc91b1c283" (UID: "c51a1013-b3ea-444a-b578-6cfc91b1c283"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.020177 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51a1013-b3ea-444a-b578-6cfc91b1c283-config" (OuterVolumeSpecName: "config") pod "c51a1013-b3ea-444a-b578-6cfc91b1c283" (UID: "c51a1013-b3ea-444a-b578-6cfc91b1c283"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.088128 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plnjg\" (UniqueName: \"kubernetes.io/projected/b603c84e-b4e1-45e2-af6b-de4905867cf6-kube-api-access-plnjg\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.088162 5000 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b603c84e-b4e1-45e2-af6b-de4905867cf6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.088174 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b603c84e-b4e1-45e2-af6b-de4905867cf6-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.088183 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktnjn\" (UniqueName: \"kubernetes.io/projected/c51a1013-b3ea-444a-b578-6cfc91b1c283-kube-api-access-ktnjn\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.088192 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzpxx\" (UniqueName: \"kubernetes.io/projected/e949189a-2b75-4081-949f-07ec69d377b5-kube-api-access-xzpxx\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.088202 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51a1013-b3ea-444a-b578-6cfc91b1c283-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.088211 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c51a1013-b3ea-444a-b578-6cfc91b1c283-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:13 crc kubenswrapper[5000]: 
I0105 21:51:13.088222 5000 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e949189a-2b75-4081-949f-07ec69d377b5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.088232 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b603c84e-b4e1-45e2-af6b-de4905867cf6-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.462582 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fz42m" event={"ID":"c51a1013-b3ea-444a-b578-6cfc91b1c283","Type":"ContainerDied","Data":"5fe3be19a6b19705f41d2eee3c85ba93510d2be72a1fe561cb00b6e5ccd8098e"} Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.462615 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fz42m" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.462628 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fe3be19a6b19705f41d2eee3c85ba93510d2be72a1fe561cb00b6e5ccd8098e" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.464287 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7796c48dd9-nmfpk" event={"ID":"e949189a-2b75-4081-949f-07ec69d377b5","Type":"ContainerDied","Data":"9eb1583a3ee4160ced1e9c7842967199d3ad73a8854ab8396f548500842cc644"} Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.464375 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7796c48dd9-nmfpk" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.470018 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77cd8467c9-7zlz2" event={"ID":"b603c84e-b4e1-45e2-af6b-de4905867cf6","Type":"ContainerDied","Data":"ac791af01c33ee2db16404db2320abc153e8e78a03e08a23ebd0663cad6ae45e"} Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.470121 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77cd8467c9-7zlz2" Jan 05 21:51:13 crc kubenswrapper[5000]: E0105 21:51:13.472300 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-65jnl" podUID="ce305106-1701-4e2e-b87a-fc358e9c99d2" Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.518933 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7796c48dd9-nmfpk"] Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.525464 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7796c48dd9-nmfpk"] Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.557063 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77cd8467c9-7zlz2"] Jan 05 21:51:13 crc kubenswrapper[5000]: I0105 21:51:13.564519 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77cd8467c9-7zlz2"] Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.031754 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dz9jp"] Jan 05 21:51:14 crc kubenswrapper[5000]: E0105 21:51:14.032268 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51a1013-b3ea-444a-b578-6cfc91b1c283" containerName="neutron-db-sync" Jan 05 21:51:14 crc 
kubenswrapper[5000]: I0105 21:51:14.032288 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51a1013-b3ea-444a-b578-6cfc91b1c283" containerName="neutron-db-sync" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.032502 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="c51a1013-b3ea-444a-b578-6cfc91b1c283" containerName="neutron-db-sync" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.033789 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.043784 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dz9jp"] Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.127212 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6fbbd8fdfb-jb8jh"] Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.128716 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.132080 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.134515 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.135464 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.135541 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-t67pj" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.148343 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fbbd8fdfb-jb8jh"] Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.209766 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-httpd-config\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.209823 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-dns-svc\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.209863 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.209903 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffm7\" (UniqueName: \"kubernetes.io/projected/035df708-e6ab-4ed5-9dc8-53f8e1da793b-kube-api-access-6ffm7\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.209926 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-config\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.209954 5000 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.210058 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.210302 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd6f8\" (UniqueName: \"kubernetes.io/projected/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-kube-api-access-zd6f8\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.210418 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-config\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.210480 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-combined-ca-bundle\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 
21:51:14.212558 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-ovndb-tls-certs\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.314613 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.314965 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd6f8\" (UniqueName: \"kubernetes.io/projected/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-kube-api-access-zd6f8\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.314986 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-config\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.315006 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-combined-ca-bundle\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.315032 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-ovndb-tls-certs\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.315086 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-httpd-config\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.315105 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-dns-svc\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.315130 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.315149 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ffm7\" (UniqueName: \"kubernetes.io/projected/035df708-e6ab-4ed5-9dc8-53f8e1da793b-kube-api-access-6ffm7\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.315163 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-config\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.315185 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.315847 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-dns-svc\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.315861 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.316396 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-config\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.316444 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-dns-swift-storage-0\") pod 
\"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.316470 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.319002 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-httpd-config\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.320310 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-ovndb-tls-certs\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.321773 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-config\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.327366 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-combined-ca-bundle\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc 
kubenswrapper[5000]: I0105 21:51:14.333938 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd6f8\" (UniqueName: \"kubernetes.io/projected/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-kube-api-access-zd6f8\") pod \"neutron-6fbbd8fdfb-jb8jh\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.337531 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ffm7\" (UniqueName: \"kubernetes.io/projected/035df708-e6ab-4ed5-9dc8-53f8e1da793b-kube-api-access-6ffm7\") pod \"dnsmasq-dns-55f844cf75-dz9jp\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.383626 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:14 crc kubenswrapper[5000]: I0105 21:51:14.460850 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.332292 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b603c84e-b4e1-45e2-af6b-de4905867cf6" path="/var/lib/kubelet/pods/b603c84e-b4e1-45e2-af6b-de4905867cf6/volumes" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.332739 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e949189a-2b75-4081-949f-07ec69d377b5" path="/var/lib/kubelet/pods/e949189a-2b75-4081-949f-07ec69d377b5/volumes" Jan 05 21:51:15 crc kubenswrapper[5000]: E0105 21:51:15.650872 5000 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 05 21:51:15 crc kubenswrapper[5000]: E0105 21:51:15.651043 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7zdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-prdrd_openstack(4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 21:51:15 crc kubenswrapper[5000]: E0105 21:51:15.652378 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-prdrd" podUID="4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.709353 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.842052 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-ovsdbserver-nb\") pod \"bf40f774-440a-4644-9324-66f2c7d2647e\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.842156 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-config\") pod \"bf40f774-440a-4644-9324-66f2c7d2647e\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.842211 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-dns-svc\") pod \"bf40f774-440a-4644-9324-66f2c7d2647e\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " Jan 05 21:51:15 crc 
kubenswrapper[5000]: I0105 21:51:15.842276 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvd6j\" (UniqueName: \"kubernetes.io/projected/bf40f774-440a-4644-9324-66f2c7d2647e-kube-api-access-cvd6j\") pod \"bf40f774-440a-4644-9324-66f2c7d2647e\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.842501 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-ovsdbserver-sb\") pod \"bf40f774-440a-4644-9324-66f2c7d2647e\" (UID: \"bf40f774-440a-4644-9324-66f2c7d2647e\") " Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.847559 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf40f774-440a-4644-9324-66f2c7d2647e-kube-api-access-cvd6j" (OuterVolumeSpecName: "kube-api-access-cvd6j") pod "bf40f774-440a-4644-9324-66f2c7d2647e" (UID: "bf40f774-440a-4644-9324-66f2c7d2647e"). InnerVolumeSpecName "kube-api-access-cvd6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.883568 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-config" (OuterVolumeSpecName: "config") pod "bf40f774-440a-4644-9324-66f2c7d2647e" (UID: "bf40f774-440a-4644-9324-66f2c7d2647e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.888355 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bf40f774-440a-4644-9324-66f2c7d2647e" (UID: "bf40f774-440a-4644-9324-66f2c7d2647e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.891767 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf40f774-440a-4644-9324-66f2c7d2647e" (UID: "bf40f774-440a-4644-9324-66f2c7d2647e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.893416 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bf40f774-440a-4644-9324-66f2c7d2647e" (UID: "bf40f774-440a-4644-9324-66f2c7d2647e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.944695 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.944726 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.944757 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.944767 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf40f774-440a-4644-9324-66f2c7d2647e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:15 crc 
kubenswrapper[5000]: I0105 21:51:15.944776 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvd6j\" (UniqueName: \"kubernetes.io/projected/bf40f774-440a-4644-9324-66f2c7d2647e-kube-api-access-cvd6j\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.959804 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86bdcd58d9-pztv2"] Jan 05 21:51:15 crc kubenswrapper[5000]: E0105 21:51:15.960203 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf40f774-440a-4644-9324-66f2c7d2647e" containerName="dnsmasq-dns" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.960223 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf40f774-440a-4644-9324-66f2c7d2647e" containerName="dnsmasq-dns" Jan 05 21:51:15 crc kubenswrapper[5000]: E0105 21:51:15.960242 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf40f774-440a-4644-9324-66f2c7d2647e" containerName="init" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.960249 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf40f774-440a-4644-9324-66f2c7d2647e" containerName="init" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.960436 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf40f774-440a-4644-9324-66f2c7d2647e" containerName="dnsmasq-dns" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.961361 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.967168 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.967430 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 05 21:51:15 crc kubenswrapper[5000]: I0105 21:51:15.969855 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86bdcd58d9-pztv2"] Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.046919 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-internal-tls-certs\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.047253 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95pnr\" (UniqueName: \"kubernetes.io/projected/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-kube-api-access-95pnr\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.047404 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-public-tls-certs\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.047587 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-httpd-config\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.047753 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-combined-ca-bundle\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.047855 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-ovndb-tls-certs\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.048016 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-config\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.149881 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-combined-ca-bundle\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.151121 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-config\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.151142 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-ovndb-tls-certs\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.151257 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-internal-tls-certs\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.151285 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95pnr\" (UniqueName: \"kubernetes.io/projected/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-kube-api-access-95pnr\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.151310 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-public-tls-certs\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.151367 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-httpd-config\") pod 
\"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.153243 5000 scope.go:117] "RemoveContainer" containerID="b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.156848 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-internal-tls-certs\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.157366 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-httpd-config\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.158042 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-config\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.159280 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-ovndb-tls-certs\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.160675 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-combined-ca-bundle\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.162305 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-public-tls-certs\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.171102 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95pnr\" (UniqueName: \"kubernetes.io/projected/43d9e1d2-3e87-4260-ba24-41e7cfbd4326-kube-api-access-95pnr\") pod \"neutron-86bdcd58d9-pztv2\" (UID: \"43d9e1d2-3e87-4260-ba24-41e7cfbd4326\") " pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: E0105 21:51:16.173434 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69\": container with ID starting with b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69 not found: ID does not exist" containerID="b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.173479 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69"} err="failed to get container status \"b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69\": rpc error: code = NotFound desc = could not find container \"b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69\": container with ID starting with b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69 not 
found: ID does not exist" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.173514 5000 scope.go:117] "RemoveContainer" containerID="5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff" Jan 05 21:51:16 crc kubenswrapper[5000]: E0105 21:51:16.174602 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff\": container with ID starting with 5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff not found: ID does not exist" containerID="5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.174636 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff"} err="failed to get container status \"5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff\": rpc error: code = NotFound desc = could not find container \"5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff\": container with ID starting with 5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff not found: ID does not exist" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.174653 5000 scope.go:117] "RemoveContainer" containerID="b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.174919 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69"} err="failed to get container status \"b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69\": rpc error: code = NotFound desc = could not find container \"b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69\": container with ID starting with 
b807921a776b2ef05922d2945c90c3c6de2c7f4a7440b208ee13a3f6a9143c69 not found: ID does not exist" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.174961 5000 scope.go:117] "RemoveContainer" containerID="5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.180497 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff"} err="failed to get container status \"5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff\": rpc error: code = NotFound desc = could not find container \"5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff\": container with ID starting with 5ef705a9092fdb6929238978853dc112511b3ce91f41e0a0da2b614fea8a35ff not found: ID does not exist" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.180570 5000 scope.go:117] "RemoveContainer" containerID="e9fe46d20e48e523e7b9e9a72ec8e19912f5821530662c5140d033e9017183ef" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.314678 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.464025 5000 scope.go:117] "RemoveContainer" containerID="f62c1c02e74a9b63f41d6f4cb04984ea6a06f67416a3d26afef14e38a7909aa7" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.548360 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65d5455f76-k75ww"] Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.555603 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-k8prf" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.558093 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-k8prf" event={"ID":"bf40f774-440a-4644-9324-66f2c7d2647e","Type":"ContainerDied","Data":"f942763052fce2e67aa2a01e043e84b8033569bdbb1ce7fe240277e011e32248"} Jan 05 21:51:16 crc kubenswrapper[5000]: E0105 21:51:16.571813 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-prdrd" podUID="4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.624575 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-k8prf"] Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.630338 5000 scope.go:117] "RemoveContainer" containerID="61f168b9c80051e15502e7c92859ec86a555a8087198db14f4888a76dc6d70dd" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.634143 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-k8prf"] Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.820373 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f48b4784d-5jgvr"] Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.845081 5000 scope.go:117] "RemoveContainer" containerID="0d5345eff85d28aeb3abd8d27303d2e0ea714549381c9db8454cf007de1f0cd7" Jan 05 21:51:16 crc kubenswrapper[5000]: I0105 21:51:16.969602 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.023049 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nhpcs"] Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 
21:51:17.039684 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.138807 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.155376 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dz9jp"] Jan 05 21:51:17 crc kubenswrapper[5000]: W0105 21:51:17.181133 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod035df708_e6ab_4ed5_9dc8_53f8e1da793b.slice/crio-02f179587957706ea974b6009955d1651df49906b3959e014438e7af876ea8e7 WatchSource:0}: Error finding container 02f179587957706ea974b6009955d1651df49906b3959e014438e7af876ea8e7: Status 404 returned error can't find the container with id 02f179587957706ea974b6009955d1651df49906b3959e014438e7af876ea8e7 Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.250693 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fbbd8fdfb-jb8jh"] Jan 05 21:51:17 crc kubenswrapper[5000]: W0105 21:51:17.266155 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c0a99dd_168d_4462_9aaf_aef2e16c9a0b.slice/crio-8ee75ad7ac9296cde9c2d87201cf67c3c3d2eb0751247a6c058508a7895c5d93 WatchSource:0}: Error finding container 8ee75ad7ac9296cde9c2d87201cf67c3c3d2eb0751247a6c058508a7895c5d93: Status 404 returned error can't find the container with id 8ee75ad7ac9296cde9c2d87201cf67c3c3d2eb0751247a6c058508a7895c5d93 Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.350168 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf40f774-440a-4644-9324-66f2c7d2647e" path="/var/lib/kubelet/pods/bf40f774-440a-4644-9324-66f2c7d2647e/volumes" Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.407248 5000 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86bdcd58d9-pztv2"] Jan 05 21:51:17 crc kubenswrapper[5000]: W0105 21:51:17.411336 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43d9e1d2_3e87_4260_ba24_41e7cfbd4326.slice/crio-a1db16c0e1e78b4a33553ef9a089faefc4269d7ba923d7bc090a7d708f2cbbfd WatchSource:0}: Error finding container a1db16c0e1e78b4a33553ef9a089faefc4269d7ba923d7bc090a7d708f2cbbfd: Status 404 returned error can't find the container with id a1db16c0e1e78b4a33553ef9a089faefc4269d7ba923d7bc090a7d708f2cbbfd Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.611722 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77e33e26-6a57-4f48-9d16-3bb5502b1f76","Type":"ContainerStarted","Data":"a8c8b079ed669aad435f15885d78ccf11ab351d19b2b62085a1b352870fb5d13"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.628857 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f48b4784d-5jgvr" event={"ID":"ed51a505-1c96-4f98-879e-75283649a949","Type":"ContainerStarted","Data":"66a850d7fb2881d19fe2f9fc925700939a961300c21f7c12c00507a85ab15ba9"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.628911 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f48b4784d-5jgvr" event={"ID":"ed51a505-1c96-4f98-879e-75283649a949","Type":"ContainerStarted","Data":"328e2c1adf503875d56b6295b5f0807b66c0e6551fbfde86011bebe7401b6909"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.628921 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f48b4784d-5jgvr" event={"ID":"ed51a505-1c96-4f98-879e-75283649a949","Type":"ContainerStarted","Data":"f3a0275b19f8e50132829737e9bedb0a752478b2f7b89e8966761e0b2b9dae37"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.644212 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" event={"ID":"035df708-e6ab-4ed5-9dc8-53f8e1da793b","Type":"ContainerStarted","Data":"02f179587957706ea974b6009955d1651df49906b3959e014438e7af876ea8e7"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.655099 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f48b4784d-5jgvr" podStartSLOduration=22.655076077 podStartE2EDuration="22.655076077s" podCreationTimestamp="2026-01-05 21:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:17.650362152 +0000 UTC m=+1032.606564631" watchObservedRunningTime="2026-01-05 21:51:17.655076077 +0000 UTC m=+1032.611278546" Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.685327 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86bdcd58d9-pztv2" event={"ID":"43d9e1d2-3e87-4260-ba24-41e7cfbd4326","Type":"ContainerStarted","Data":"a1db16c0e1e78b4a33553ef9a089faefc4269d7ba923d7bc090a7d708f2cbbfd"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.691508 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nhpcs" event={"ID":"024cd8c9-c0c9-4f2c-884b-e818c2a95133","Type":"ContainerStarted","Data":"b40da413314869d5c3919d9fe0fde2042e8de15106cae3e0fbbd80911738984e"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.691549 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nhpcs" event={"ID":"024cd8c9-c0c9-4f2c-884b-e818c2a95133","Type":"ContainerStarted","Data":"e6a0587f71fad1892ac65b5975f221626cd40729591d0a165a5cb18e7d538a1f"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.697092 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9b6995df-77gt4" 
event={"ID":"36acfd32-be57-4078-a5a6-b31cf5608620","Type":"ContainerStarted","Data":"a989d89bbd9bc4cabf2763799aeb94a684136555a2bc1f090e37cbe69b1c7c4c"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.697122 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9b6995df-77gt4" event={"ID":"36acfd32-be57-4078-a5a6-b31cf5608620","Type":"ContainerStarted","Data":"c5cdd304dab123e293afdbd9cf3acb578b37e7f021660d265c546d998cc5db0a"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.697210 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f9b6995df-77gt4" podUID="36acfd32-be57-4078-a5a6-b31cf5608620" containerName="horizon-log" containerID="cri-o://c5cdd304dab123e293afdbd9cf3acb578b37e7f021660d265c546d998cc5db0a" gracePeriod=30 Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.697372 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f9b6995df-77gt4" podUID="36acfd32-be57-4078-a5a6-b31cf5608620" containerName="horizon" containerID="cri-o://a989d89bbd9bc4cabf2763799aeb94a684136555a2bc1f090e37cbe69b1c7c4c" gracePeriod=30 Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.714305 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fbbd8fdfb-jb8jh" event={"ID":"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b","Type":"ContainerStarted","Data":"8ee75ad7ac9296cde9c2d87201cf67c3c3d2eb0751247a6c058508a7895c5d93"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.725500 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e11de54-ff33-4464-ab87-a565a688e5b5","Type":"ContainerStarted","Data":"e7a6e620eecab8109d33efbaccb5eb31ad7b11a1c08ede39728959e5070cb633"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.726750 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nhpcs" 
podStartSLOduration=13.726731719 podStartE2EDuration="13.726731719s" podCreationTimestamp="2026-01-05 21:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:17.721698425 +0000 UTC m=+1032.677900894" watchObservedRunningTime="2026-01-05 21:51:17.726731719 +0000 UTC m=+1032.682934188" Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.735822 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65d5455f76-k75ww" event={"ID":"e000bdc7-d544-4dfe-ab2e-6c43a7453748","Type":"ContainerStarted","Data":"d96fceace8ba67a8696e1baf1bcacdfd1837094a7e25764234e2ee39c7437769"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.735872 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65d5455f76-k75ww" event={"ID":"e000bdc7-d544-4dfe-ab2e-6c43a7453748","Type":"ContainerStarted","Data":"2bc68cc6f289e4695987859a861fc71e979fb05f30cd34067e711b63a3b9ff85"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.735885 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65d5455f76-k75ww" event={"ID":"e000bdc7-d544-4dfe-ab2e-6c43a7453748","Type":"ContainerStarted","Data":"c5e1e95e590028c083713ec1d7479b91b6cfc18a9604764c9af2a321c40a3b73"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.753108 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d726b11-25d7-4065-9097-5d61acac1fc6","Type":"ContainerStarted","Data":"a7cadab88b5de8850f67a89d7925f178a41768286d7319f66881b88cf9c3a281"} Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.767985 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.775235 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f9b6995df-77gt4" 
podStartSLOduration=7.780628987 podStartE2EDuration="31.77521481s" podCreationTimestamp="2026-01-05 21:50:46 +0000 UTC" firstStartedPulling="2026-01-05 21:50:48.751871622 +0000 UTC m=+1003.708074091" lastFinishedPulling="2026-01-05 21:51:12.746457445 +0000 UTC m=+1027.702659914" observedRunningTime="2026-01-05 21:51:17.747403168 +0000 UTC m=+1032.703605637" watchObservedRunningTime="2026-01-05 21:51:17.77521481 +0000 UTC m=+1032.731417309" Jan 05 21:51:17 crc kubenswrapper[5000]: I0105 21:51:17.784610 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-65d5455f76-k75ww" podStartSLOduration=22.783798675 podStartE2EDuration="22.783798675s" podCreationTimestamp="2026-01-05 21:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:17.773388588 +0000 UTC m=+1032.729591057" watchObservedRunningTime="2026-01-05 21:51:17.783798675 +0000 UTC m=+1032.740001144" Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.810260 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86bdcd58d9-pztv2" event={"ID":"43d9e1d2-3e87-4260-ba24-41e7cfbd4326","Type":"ContainerStarted","Data":"fcb3e27a3498f2c2b4a057f59b7839307d4f903392f182de4e05ec0556dbe33f"} Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.811055 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86bdcd58d9-pztv2" event={"ID":"43d9e1d2-3e87-4260-ba24-41e7cfbd4326","Type":"ContainerStarted","Data":"4337c3c2a24925e08cf736e7ceed1639292769ccb2fe7ba846cd3c144796c34a"} Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.812672 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.828506 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dgtdq" 
event={"ID":"faf9d2c1-13d7-4475-a978-9b02ccb6374d","Type":"ContainerStarted","Data":"40841333e4b54fe6fc4f59ad43e255090115c78dc987071d469c8633bd7bfedf"} Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.864142 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e11de54-ff33-4464-ab87-a565a688e5b5","Type":"ContainerStarted","Data":"5b3e19ec85ab5d1bfc58b48b7c2c6760222c79cbb6bec39f4cba459ef7bca5cf"} Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.893494 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fbbd8fdfb-jb8jh" event={"ID":"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b","Type":"ContainerStarted","Data":"fdd4857884fb4751b376b5e0d8d89a5dfe13335920aefaad48554cfd07b5bfe4"} Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.893831 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fbbd8fdfb-jb8jh" event={"ID":"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b","Type":"ContainerStarted","Data":"e0f390a4c170a3b48071ee13d90429ff93ff0c8145351db2e1333d0469d6d528"} Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.895332 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.932242 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86bdcd58d9-pztv2" podStartSLOduration=3.932221952 podStartE2EDuration="3.932221952s" podCreationTimestamp="2026-01-05 21:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:18.862555557 +0000 UTC m=+1033.818758026" watchObservedRunningTime="2026-01-05 21:51:18.932221952 +0000 UTC m=+1033.888424421" Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.933790 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-db-sync-dgtdq" podStartSLOduration=3.65504356 podStartE2EDuration="32.933783896s" podCreationTimestamp="2026-01-05 21:50:46 +0000 UTC" firstStartedPulling="2026-01-05 21:50:48.80549017 +0000 UTC m=+1003.761692639" lastFinishedPulling="2026-01-05 21:51:18.084230506 +0000 UTC m=+1033.040432975" observedRunningTime="2026-01-05 21:51:18.893307673 +0000 UTC m=+1033.849510152" watchObservedRunningTime="2026-01-05 21:51:18.933783896 +0000 UTC m=+1033.889986365" Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.940964 5000 generic.go:334] "Generic (PLEG): container finished" podID="035df708-e6ab-4ed5-9dc8-53f8e1da793b" containerID="348efc7f982b20e099f7ddad5d31dd8f2038a0f766572d30749faea05a5aabf6" exitCode=0 Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.941023 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" event={"ID":"035df708-e6ab-4ed5-9dc8-53f8e1da793b","Type":"ContainerDied","Data":"348efc7f982b20e099f7ddad5d31dd8f2038a0f766572d30749faea05a5aabf6"} Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.958155 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6fbbd8fdfb-jb8jh" podStartSLOduration=4.95813671 podStartE2EDuration="4.95813671s" podCreationTimestamp="2026-01-05 21:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:18.930425381 +0000 UTC m=+1033.886627850" watchObservedRunningTime="2026-01-05 21:51:18.95813671 +0000 UTC m=+1033.914339179" Jan 05 21:51:18 crc kubenswrapper[5000]: I0105 21:51:18.984850 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d726b11-25d7-4065-9097-5d61acac1fc6","Type":"ContainerStarted","Data":"8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878"} Jan 05 21:51:19 crc kubenswrapper[5000]: I0105 21:51:19.251823 5000 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-k8prf" podUID="bf40f774-440a-4644-9324-66f2c7d2647e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 05 21:51:19 crc kubenswrapper[5000]: I0105 21:51:19.995948 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e11de54-ff33-4464-ab87-a565a688e5b5","Type":"ContainerStarted","Data":"76e09c4cbb3e238dcd2e7491cb919eb10fdf4583f80385175b8a5625a9b06317"} Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.023848 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" event={"ID":"035df708-e6ab-4ed5-9dc8-53f8e1da793b","Type":"ContainerStarted","Data":"4cc2c9cc1b0017b1326a610aa2adb30a6ff5f9969a9cbcc7ce849c7d9ce4537c"} Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.024033 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.026802 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1d726b11-25d7-4065-9097-5d61acac1fc6" containerName="glance-log" containerID="cri-o://8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878" gracePeriod=30 Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.027027 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d726b11-25d7-4065-9097-5d61acac1fc6","Type":"ContainerStarted","Data":"b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21"} Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.027388 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=15.02737128 podStartE2EDuration="15.02737128s" 
podCreationTimestamp="2026-01-05 21:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:20.024287032 +0000 UTC m=+1034.980489501" watchObservedRunningTime="2026-01-05 21:51:20.02737128 +0000 UTC m=+1034.983573749" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.027479 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1d726b11-25d7-4065-9097-5d61acac1fc6" containerName="glance-httpd" containerID="cri-o://b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21" gracePeriod=30 Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.052863 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" podStartSLOduration=6.052846416 podStartE2EDuration="6.052846416s" podCreationTimestamp="2026-01-05 21:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:20.046544696 +0000 UTC m=+1035.002747165" watchObservedRunningTime="2026-01-05 21:51:20.052846416 +0000 UTC m=+1035.009048885" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.075260 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=28.075242274 podStartE2EDuration="28.075242274s" podCreationTimestamp="2026-01-05 21:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:20.065044963 +0000 UTC m=+1035.021247432" watchObservedRunningTime="2026-01-05 21:51:20.075242274 +0000 UTC m=+1035.031444743" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.747102 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.841554 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-scripts\") pod \"1d726b11-25d7-4065-9097-5d61acac1fc6\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.841627 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d726b11-25d7-4065-9097-5d61acac1fc6-logs\") pod \"1d726b11-25d7-4065-9097-5d61acac1fc6\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.841678 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-config-data\") pod \"1d726b11-25d7-4065-9097-5d61acac1fc6\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.841784 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-combined-ca-bundle\") pod \"1d726b11-25d7-4065-9097-5d61acac1fc6\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.841811 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d726b11-25d7-4065-9097-5d61acac1fc6-httpd-run\") pod \"1d726b11-25d7-4065-9097-5d61acac1fc6\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.841867 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7ltt\" (UniqueName: 
\"kubernetes.io/projected/1d726b11-25d7-4065-9097-5d61acac1fc6-kube-api-access-m7ltt\") pod \"1d726b11-25d7-4065-9097-5d61acac1fc6\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.842545 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-public-tls-certs\") pod \"1d726b11-25d7-4065-9097-5d61acac1fc6\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.842594 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"1d726b11-25d7-4065-9097-5d61acac1fc6\" (UID: \"1d726b11-25d7-4065-9097-5d61acac1fc6\") " Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.842961 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d726b11-25d7-4065-9097-5d61acac1fc6-logs" (OuterVolumeSpecName: "logs") pod "1d726b11-25d7-4065-9097-5d61acac1fc6" (UID: "1d726b11-25d7-4065-9097-5d61acac1fc6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.843278 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d726b11-25d7-4065-9097-5d61acac1fc6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1d726b11-25d7-4065-9097-5d61acac1fc6" (UID: "1d726b11-25d7-4065-9097-5d61acac1fc6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.843434 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d726b11-25d7-4065-9097-5d61acac1fc6-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.843461 5000 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d726b11-25d7-4065-9097-5d61acac1fc6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.854043 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "1d726b11-25d7-4065-9097-5d61acac1fc6" (UID: "1d726b11-25d7-4065-9097-5d61acac1fc6"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.854201 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d726b11-25d7-4065-9097-5d61acac1fc6-kube-api-access-m7ltt" (OuterVolumeSpecName: "kube-api-access-m7ltt") pod "1d726b11-25d7-4065-9097-5d61acac1fc6" (UID: "1d726b11-25d7-4065-9097-5d61acac1fc6"). InnerVolumeSpecName "kube-api-access-m7ltt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.862049 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-scripts" (OuterVolumeSpecName: "scripts") pod "1d726b11-25d7-4065-9097-5d61acac1fc6" (UID: "1d726b11-25d7-4065-9097-5d61acac1fc6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.874008 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d726b11-25d7-4065-9097-5d61acac1fc6" (UID: "1d726b11-25d7-4065-9097-5d61acac1fc6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.898379 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1d726b11-25d7-4065-9097-5d61acac1fc6" (UID: "1d726b11-25d7-4065-9097-5d61acac1fc6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.968090 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.968144 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7ltt\" (UniqueName: \"kubernetes.io/projected/1d726b11-25d7-4065-9097-5d61acac1fc6-kube-api-access-m7ltt\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.968154 5000 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.968193 5000 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.968204 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.986251 5000 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 05 21:51:20 crc kubenswrapper[5000]: I0105 21:51:20.999607 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-config-data" (OuterVolumeSpecName: "config-data") pod "1d726b11-25d7-4065-9097-5d61acac1fc6" (UID: "1d726b11-25d7-4065-9097-5d61acac1fc6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.038554 5000 generic.go:334] "Generic (PLEG): container finished" podID="1d726b11-25d7-4065-9097-5d61acac1fc6" containerID="b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21" exitCode=0 Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.038591 5000 generic.go:334] "Generic (PLEG): container finished" podID="1d726b11-25d7-4065-9097-5d61acac1fc6" containerID="8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878" exitCode=143 Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.039713 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.051546 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d726b11-25d7-4065-9097-5d61acac1fc6","Type":"ContainerDied","Data":"b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21"} Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.051609 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d726b11-25d7-4065-9097-5d61acac1fc6","Type":"ContainerDied","Data":"8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878"} Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.051629 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1d726b11-25d7-4065-9097-5d61acac1fc6","Type":"ContainerDied","Data":"a7cadab88b5de8850f67a89d7925f178a41768286d7319f66881b88cf9c3a281"} Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.051646 5000 scope.go:117] "RemoveContainer" containerID="b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.070339 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d726b11-25d7-4065-9097-5d61acac1fc6-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.070692 5000 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.110782 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.123948 5000 scope.go:117] "RemoveContainer" 
containerID="8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.124356 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.141125 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:51:21 crc kubenswrapper[5000]: E0105 21:51:21.141512 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d726b11-25d7-4065-9097-5d61acac1fc6" containerName="glance-httpd" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.141527 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d726b11-25d7-4065-9097-5d61acac1fc6" containerName="glance-httpd" Jan 05 21:51:21 crc kubenswrapper[5000]: E0105 21:51:21.141559 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d726b11-25d7-4065-9097-5d61acac1fc6" containerName="glance-log" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.141566 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d726b11-25d7-4065-9097-5d61acac1fc6" containerName="glance-log" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.142925 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d726b11-25d7-4065-9097-5d61acac1fc6" containerName="glance-httpd" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.142963 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d726b11-25d7-4065-9097-5d61acac1fc6" containerName="glance-log" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.143997 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.148508 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.148669 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.154258 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.164102 5000 scope.go:117] "RemoveContainer" containerID="b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21" Jan 05 21:51:21 crc kubenswrapper[5000]: E0105 21:51:21.164530 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21\": container with ID starting with b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21 not found: ID does not exist" containerID="b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.164574 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21"} err="failed to get container status \"b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21\": rpc error: code = NotFound desc = could not find container \"b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21\": container with ID starting with b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21 not found: ID does not exist" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.164608 5000 scope.go:117] "RemoveContainer" 
containerID="8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878" Jan 05 21:51:21 crc kubenswrapper[5000]: E0105 21:51:21.168777 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878\": container with ID starting with 8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878 not found: ID does not exist" containerID="8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.168825 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878"} err="failed to get container status \"8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878\": rpc error: code = NotFound desc = could not find container \"8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878\": container with ID starting with 8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878 not found: ID does not exist" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.168855 5000 scope.go:117] "RemoveContainer" containerID="b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.169580 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21"} err="failed to get container status \"b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21\": rpc error: code = NotFound desc = could not find container \"b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21\": container with ID starting with b0044820750218ce3c75eff21ca1de86a1462a49e17569dc65fe6723d4233c21 not found: ID does not exist" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.169640 5000 scope.go:117] 
"RemoveContainer" containerID="8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.170149 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878"} err="failed to get container status \"8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878\": rpc error: code = NotFound desc = could not find container \"8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878\": container with ID starting with 8530519d6ad8cf0a97eea3fcc0a728fa1fbe8a45dcaee5a28075b1b9527c8878 not found: ID does not exist" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.275319 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a784c52f-445a-4e50-8e93-3197d01b0f01-logs\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.275398 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6nsl\" (UniqueName: \"kubernetes.io/projected/a784c52f-445a-4e50-8e93-3197d01b0f01-kube-api-access-f6nsl\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.275437 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a784c52f-445a-4e50-8e93-3197d01b0f01-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.275493 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.275591 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.275628 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-scripts\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.275660 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-config-data\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.275697 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.338658 5000 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d726b11-25d7-4065-9097-5d61acac1fc6" path="/var/lib/kubelet/pods/1d726b11-25d7-4065-9097-5d61acac1fc6/volumes" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.384521 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-scripts\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.384587 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-config-data\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.384638 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.384687 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a784c52f-445a-4e50-8e93-3197d01b0f01-logs\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.384760 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6nsl\" (UniqueName: \"kubernetes.io/projected/a784c52f-445a-4e50-8e93-3197d01b0f01-kube-api-access-f6nsl\") pod 
\"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.384789 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a784c52f-445a-4e50-8e93-3197d01b0f01-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.384860 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.385015 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.385405 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.385593 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a784c52f-445a-4e50-8e93-3197d01b0f01-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.385649 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a784c52f-445a-4e50-8e93-3197d01b0f01-logs\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.390429 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.390937 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-scripts\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.391482 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.392724 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-config-data\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc 
kubenswrapper[5000]: I0105 21:51:21.404408 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6nsl\" (UniqueName: \"kubernetes.io/projected/a784c52f-445a-4e50-8e93-3197d01b0f01-kube-api-access-f6nsl\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.414547 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " pod="openstack/glance-default-external-api-0" Jan 05 21:51:21 crc kubenswrapper[5000]: I0105 21:51:21.474521 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 21:51:22 crc kubenswrapper[5000]: I0105 21:51:22.049677 5000 generic.go:334] "Generic (PLEG): container finished" podID="024cd8c9-c0c9-4f2c-884b-e818c2a95133" containerID="b40da413314869d5c3919d9fe0fde2042e8de15106cae3e0fbbd80911738984e" exitCode=0 Jan 05 21:51:22 crc kubenswrapper[5000]: I0105 21:51:22.049762 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nhpcs" event={"ID":"024cd8c9-c0c9-4f2c-884b-e818c2a95133","Type":"ContainerDied","Data":"b40da413314869d5c3919d9fe0fde2042e8de15106cae3e0fbbd80911738984e"} Jan 05 21:51:23 crc kubenswrapper[5000]: I0105 21:51:23.059450 5000 generic.go:334] "Generic (PLEG): container finished" podID="faf9d2c1-13d7-4475-a978-9b02ccb6374d" containerID="40841333e4b54fe6fc4f59ad43e255090115c78dc987071d469c8633bd7bfedf" exitCode=0 Jan 05 21:51:23 crc kubenswrapper[5000]: I0105 21:51:23.059676 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dgtdq" 
event={"ID":"faf9d2c1-13d7-4475-a978-9b02ccb6374d","Type":"ContainerDied","Data":"40841333e4b54fe6fc4f59ad43e255090115c78dc987071d469c8633bd7bfedf"} Jan 05 21:51:24 crc kubenswrapper[5000]: I0105 21:51:24.174736 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:51:24 crc kubenswrapper[5000]: I0105 21:51:24.387129 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:24 crc kubenswrapper[5000]: I0105 21:51:24.440911 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gwm8h"] Jan 05 21:51:24 crc kubenswrapper[5000]: I0105 21:51:24.441184 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" podUID="f6024769-eb72-4852-9278-e86730c00512" containerName="dnsmasq-dns" containerID="cri-o://c7231c3da6807b778877dbde450a8270eabdada9d4c18b0fa7e341a8cbba7637" gracePeriod=10 Jan 05 21:51:25 crc kubenswrapper[5000]: I0105 21:51:25.078594 5000 generic.go:334] "Generic (PLEG): container finished" podID="f6024769-eb72-4852-9278-e86730c00512" containerID="c7231c3da6807b778877dbde450a8270eabdada9d4c18b0fa7e341a8cbba7637" exitCode=0 Jan 05 21:51:25 crc kubenswrapper[5000]: I0105 21:51:25.078636 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" event={"ID":"f6024769-eb72-4852-9278-e86730c00512","Type":"ContainerDied","Data":"c7231c3da6807b778877dbde450a8270eabdada9d4c18b0fa7e341a8cbba7637"} Jan 05 21:51:25 crc kubenswrapper[5000]: I0105 21:51:25.725021 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-65d5455f76-k75ww" Jan 05 21:51:25 crc kubenswrapper[5000]: I0105 21:51:25.725117 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-65d5455f76-k75ww" Jan 05 21:51:25 crc kubenswrapper[5000]: I0105 
21:51:25.884870 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f48b4784d-5jgvr" Jan 05 21:51:25 crc kubenswrapper[5000]: I0105 21:51:25.884939 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f48b4784d-5jgvr" Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.133450 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.136253 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.165238 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.177150 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 21:51:26 crc kubenswrapper[5000]: W0105 21:51:26.657103 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda784c52f_445a_4e50_8e93_3197d01b0f01.slice/crio-389feab859c1c14907d5dae3dd4a5aeda269ac039807e1e75027dff69e1a1b06 WatchSource:0}: Error finding container 389feab859c1c14907d5dae3dd4a5aeda269ac039807e1e75027dff69e1a1b06: Status 404 returned error can't find the container with id 389feab859c1c14907d5dae3dd4a5aeda269ac039807e1e75027dff69e1a1b06 Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.876324 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.887839 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dgtdq" Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.976537 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-combined-ca-bundle\") pod \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.976577 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g2s4\" (UniqueName: \"kubernetes.io/projected/faf9d2c1-13d7-4475-a978-9b02ccb6374d-kube-api-access-9g2s4\") pod \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.976609 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb8jh\" (UniqueName: \"kubernetes.io/projected/024cd8c9-c0c9-4f2c-884b-e818c2a95133-kube-api-access-gb8jh\") pod \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.976658 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-scripts\") pod \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.976681 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-combined-ca-bundle\") pod \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.976714 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-credential-keys\") pod \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.976801 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-fernet-keys\") pod \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.976822 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-config-data\") pod \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\" (UID: \"024cd8c9-c0c9-4f2c-884b-e818c2a95133\") " Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.976855 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-config-data\") pod \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.976870 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-scripts\") pod \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.976908 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf9d2c1-13d7-4475-a978-9b02ccb6374d-logs\") pod \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\" (UID: \"faf9d2c1-13d7-4475-a978-9b02ccb6374d\") " Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.977924 5000 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf9d2c1-13d7-4475-a978-9b02ccb6374d-logs" (OuterVolumeSpecName: "logs") pod "faf9d2c1-13d7-4475-a978-9b02ccb6374d" (UID: "faf9d2c1-13d7-4475-a978-9b02ccb6374d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.985780 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "024cd8c9-c0c9-4f2c-884b-e818c2a95133" (UID: "024cd8c9-c0c9-4f2c-884b-e818c2a95133"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.986413 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-scripts" (OuterVolumeSpecName: "scripts") pod "024cd8c9-c0c9-4f2c-884b-e818c2a95133" (UID: "024cd8c9-c0c9-4f2c-884b-e818c2a95133"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.989289 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/024cd8c9-c0c9-4f2c-884b-e818c2a95133-kube-api-access-gb8jh" (OuterVolumeSpecName: "kube-api-access-gb8jh") pod "024cd8c9-c0c9-4f2c-884b-e818c2a95133" (UID: "024cd8c9-c0c9-4f2c-884b-e818c2a95133"). InnerVolumeSpecName "kube-api-access-gb8jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:26 crc kubenswrapper[5000]: I0105 21:51:26.991908 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-scripts" (OuterVolumeSpecName: "scripts") pod "faf9d2c1-13d7-4475-a978-9b02ccb6374d" (UID: "faf9d2c1-13d7-4475-a978-9b02ccb6374d"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:26.999103 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf9d2c1-13d7-4475-a978-9b02ccb6374d-kube-api-access-9g2s4" (OuterVolumeSpecName: "kube-api-access-9g2s4") pod "faf9d2c1-13d7-4475-a978-9b02ccb6374d" (UID: "faf9d2c1-13d7-4475-a978-9b02ccb6374d"). InnerVolumeSpecName "kube-api-access-9g2s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.003862 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "024cd8c9-c0c9-4f2c-884b-e818c2a95133" (UID: "024cd8c9-c0c9-4f2c-884b-e818c2a95133"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.024257 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "024cd8c9-c0c9-4f2c-884b-e818c2a95133" (UID: "024cd8c9-c0c9-4f2c-884b-e818c2a95133"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.078728 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.078749 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf9d2c1-13d7-4475-a978-9b02ccb6374d-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.078759 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g2s4\" (UniqueName: \"kubernetes.io/projected/faf9d2c1-13d7-4475-a978-9b02ccb6374d-kube-api-access-9g2s4\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.078769 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb8jh\" (UniqueName: \"kubernetes.io/projected/024cd8c9-c0c9-4f2c-884b-e818c2a95133-kube-api-access-gb8jh\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.078777 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.078785 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.078793 5000 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.078800 5000 
reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.080552 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-config-data" (OuterVolumeSpecName: "config-data") pod "024cd8c9-c0c9-4f2c-884b-e818c2a95133" (UID: "024cd8c9-c0c9-4f2c-884b-e818c2a95133"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.083983 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faf9d2c1-13d7-4475-a978-9b02ccb6374d" (UID: "faf9d2c1-13d7-4475-a978-9b02ccb6374d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.087480 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-config-data" (OuterVolumeSpecName: "config-data") pod "faf9d2c1-13d7-4475-a978-9b02ccb6374d" (UID: "faf9d2c1-13d7-4475-a978-9b02ccb6374d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.099601 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.160358 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nhpcs" event={"ID":"024cd8c9-c0c9-4f2c-884b-e818c2a95133","Type":"ContainerDied","Data":"e6a0587f71fad1892ac65b5975f221626cd40729591d0a165a5cb18e7d538a1f"} Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.160400 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6a0587f71fad1892ac65b5975f221626cd40729591d0a165a5cb18e7d538a1f" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.160462 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nhpcs" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.179154 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-dns-swift-storage-0\") pod \"f6024769-eb72-4852-9278-e86730c00512\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.179237 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-dns-svc\") pod \"f6024769-eb72-4852-9278-e86730c00512\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.179254 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-config\") pod \"f6024769-eb72-4852-9278-e86730c00512\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.179314 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlcqj\" 
(UniqueName: \"kubernetes.io/projected/f6024769-eb72-4852-9278-e86730c00512-kube-api-access-xlcqj\") pod \"f6024769-eb72-4852-9278-e86730c00512\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.179339 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-ovsdbserver-nb\") pod \"f6024769-eb72-4852-9278-e86730c00512\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.179369 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-ovsdbserver-sb\") pod \"f6024769-eb72-4852-9278-e86730c00512\" (UID: \"f6024769-eb72-4852-9278-e86730c00512\") " Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.179589 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/024cd8c9-c0c9-4f2c-884b-e818c2a95133-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.179599 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.179841 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf9d2c1-13d7-4475-a978-9b02ccb6374d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.187606 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6024769-eb72-4852-9278-e86730c00512-kube-api-access-xlcqj" (OuterVolumeSpecName: "kube-api-access-xlcqj") pod 
"f6024769-eb72-4852-9278-e86730c00512" (UID: "f6024769-eb72-4852-9278-e86730c00512"). InnerVolumeSpecName "kube-api-access-xlcqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.195339 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" event={"ID":"f6024769-eb72-4852-9278-e86730c00512","Type":"ContainerDied","Data":"f8a31bb0eb95d427650a1f7bb99aa89eb1fe3f40766c1b6e8bec16fc85a525f4"} Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.195396 5000 scope.go:117] "RemoveContainer" containerID="c7231c3da6807b778877dbde450a8270eabdada9d4c18b0fa7e341a8cbba7637" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.195533 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gwm8h" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.211462 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a784c52f-445a-4e50-8e93-3197d01b0f01","Type":"ContainerStarted","Data":"389feab859c1c14907d5dae3dd4a5aeda269ac039807e1e75027dff69e1a1b06"} Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.217909 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dgtdq" event={"ID":"faf9d2c1-13d7-4475-a978-9b02ccb6374d","Type":"ContainerDied","Data":"b55eeb48473b3d1059c53e6d0b67386e441359e5d9077d13878ad9117b81269d"} Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.217950 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b55eeb48473b3d1059c53e6d0b67386e441359e5d9077d13878ad9117b81269d" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.217970 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.218137 5000 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.218373 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dgtdq" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.231657 5000 scope.go:117] "RemoveContainer" containerID="030da89c0f42622ed60d330e21c5cf4f5b0acac8b4053fbc3422cbd0c8ff5071" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.284018 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlcqj\" (UniqueName: \"kubernetes.io/projected/f6024769-eb72-4852-9278-e86730c00512-kube-api-access-xlcqj\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.304670 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-config" (OuterVolumeSpecName: "config") pod "f6024769-eb72-4852-9278-e86730c00512" (UID: "f6024769-eb72-4852-9278-e86730c00512"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.313461 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f6024769-eb72-4852-9278-e86730c00512" (UID: "f6024769-eb72-4852-9278-e86730c00512"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.322249 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f6024769-eb72-4852-9278-e86730c00512" (UID: "f6024769-eb72-4852-9278-e86730c00512"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.331302 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f6024769-eb72-4852-9278-e86730c00512" (UID: "f6024769-eb72-4852-9278-e86730c00512"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.352602 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f6024769-eb72-4852-9278-e86730c00512" (UID: "f6024769-eb72-4852-9278-e86730c00512"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.387847 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.388057 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.388135 5000 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.388188 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-dns-svc\") on node 
\"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.388244 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6024769-eb72-4852-9278-e86730c00512-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.532744 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gwm8h"] Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.548039 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gwm8h"] Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.985797 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6c8579bfdd-r7vxj"] Jan 05 21:51:27 crc kubenswrapper[5000]: E0105 21:51:27.986427 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6024769-eb72-4852-9278-e86730c00512" containerName="dnsmasq-dns" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.986440 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6024769-eb72-4852-9278-e86730c00512" containerName="dnsmasq-dns" Jan 05 21:51:27 crc kubenswrapper[5000]: E0105 21:51:27.986459 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6024769-eb72-4852-9278-e86730c00512" containerName="init" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.986465 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6024769-eb72-4852-9278-e86730c00512" containerName="init" Jan 05 21:51:27 crc kubenswrapper[5000]: E0105 21:51:27.986474 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf9d2c1-13d7-4475-a978-9b02ccb6374d" containerName="placement-db-sync" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.986481 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf9d2c1-13d7-4475-a978-9b02ccb6374d" containerName="placement-db-sync" Jan 05 21:51:27 crc kubenswrapper[5000]: E0105 21:51:27.986502 5000 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="024cd8c9-c0c9-4f2c-884b-e818c2a95133" containerName="keystone-bootstrap" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.986508 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="024cd8c9-c0c9-4f2c-884b-e818c2a95133" containerName="keystone-bootstrap" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.986666 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf9d2c1-13d7-4475-a978-9b02ccb6374d" containerName="placement-db-sync" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.986687 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="024cd8c9-c0c9-4f2c-884b-e818c2a95133" containerName="keystone-bootstrap" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.986707 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6024769-eb72-4852-9278-e86730c00512" containerName="dnsmasq-dns" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.987316 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.992023 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.992362 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.992502 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.992664 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.993226 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zcmgb" Jan 05 21:51:27 crc kubenswrapper[5000]: I0105 21:51:27.993368 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.016360 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c8579bfdd-r7vxj"] Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.085709 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-859855f89d-t6p2g"] Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.088335 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.091466 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-67b6v" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.091733 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.092055 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.092522 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.092851 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.104648 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-credential-keys\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.104708 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-combined-ca-bundle\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.104732 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-public-tls-certs\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.104755 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-scripts\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.104800 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-config-data\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.104993 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-fernet-keys\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.105034 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-internal-tls-certs\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.105326 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvmwp\" (UniqueName: 
\"kubernetes.io/projected/edc2dca8-56cc-43b6-b35d-18b84ff237d3-kube-api-access-lvmwp\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.115664 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-859855f89d-t6p2g"] Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.208724 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-fernet-keys\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.208775 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-internal-tls-certs\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.208837 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-config-data\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.208935 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-internal-tls-certs\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 
21:51:28.208984 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-public-tls-certs\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.209011 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-combined-ca-bundle\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.209073 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvmwp\" (UniqueName: \"kubernetes.io/projected/edc2dca8-56cc-43b6-b35d-18b84ff237d3-kube-api-access-lvmwp\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.209108 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-credential-keys\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.209139 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-scripts\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.209193 
5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-combined-ca-bundle\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.209218 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-public-tls-certs\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.209256 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvjsm\" (UniqueName: \"kubernetes.io/projected/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-kube-api-access-lvjsm\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.209284 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-scripts\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.209307 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-logs\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.209401 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-config-data\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.224874 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-fernet-keys\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.224879 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-public-tls-certs\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.224872 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-credential-keys\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.225082 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-internal-tls-certs\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.227123 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-config-data\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.239309 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvmwp\" (UniqueName: \"kubernetes.io/projected/edc2dca8-56cc-43b6-b35d-18b84ff237d3-kube-api-access-lvmwp\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.243316 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-scripts\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.252056 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc2dca8-56cc-43b6-b35d-18b84ff237d3-combined-ca-bundle\") pod \"keystone-6c8579bfdd-r7vxj\" (UID: \"edc2dca8-56cc-43b6-b35d-18b84ff237d3\") " pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.287686 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-65jnl" event={"ID":"ce305106-1701-4e2e-b87a-fc358e9c99d2","Type":"ContainerStarted","Data":"8ec434cc706954296e05fd728532e26a869367de103707bea5a589d46cb52d25"} Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.303150 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a784c52f-445a-4e50-8e93-3197d01b0f01","Type":"ContainerStarted","Data":"6c3d93618a51e9b4bda0c46cb1a773e10adf685afcae798cc74db5624e5ca8cc"} Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 
21:51:28.303383 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a784c52f-445a-4e50-8e93-3197d01b0f01","Type":"ContainerStarted","Data":"0862ebb35357628e4f45ec8191b9d13ac2aa66d190788e495228e650f091c797"} Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.308606 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-65jnl" podStartSLOduration=4.064728645 podStartE2EDuration="42.30859338s" podCreationTimestamp="2026-01-05 21:50:46 +0000 UTC" firstStartedPulling="2026-01-05 21:50:48.780511068 +0000 UTC m=+1003.736713537" lastFinishedPulling="2026-01-05 21:51:27.024375803 +0000 UTC m=+1041.980578272" observedRunningTime="2026-01-05 21:51:28.305543753 +0000 UTC m=+1043.261746222" watchObservedRunningTime="2026-01-05 21:51:28.30859338 +0000 UTC m=+1043.264795849" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.310846 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-combined-ca-bundle\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.312817 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.320857 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-scripts\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.321007 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvjsm\" (UniqueName: \"kubernetes.io/projected/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-kube-api-access-lvjsm\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.321049 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-logs\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.321262 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-config-data\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.321342 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-internal-tls-certs\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc 
kubenswrapper[5000]: I0105 21:51:28.321419 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-public-tls-certs\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.322213 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-combined-ca-bundle\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.322578 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-logs\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.326042 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77e33e26-6a57-4f48-9d16-3bb5502b1f76","Type":"ContainerStarted","Data":"28226b1d0d41ad62538f1f8c07ede3fe51fe633fad414b174aa289b8ed538265"} Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.339935 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-internal-tls-certs\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.341282 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-public-tls-certs\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.341671 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-config-data\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.351254 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-scripts\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.358615 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.358593475 podStartE2EDuration="7.358593475s" podCreationTimestamp="2026-01-05 21:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:28.341541309 +0000 UTC m=+1043.297743788" watchObservedRunningTime="2026-01-05 21:51:28.358593475 +0000 UTC m=+1043.314795944" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.384439 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvjsm\" (UniqueName: \"kubernetes.io/projected/1aa85c76-2f7d-4716-bd4c-4f6f53b75d01-kube-api-access-lvjsm\") pod \"placement-859855f89d-t6p2g\" (UID: \"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01\") " pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.440276 5000 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:28 crc kubenswrapper[5000]: I0105 21:51:28.814947 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c8579bfdd-r7vxj"] Jan 05 21:51:29 crc kubenswrapper[5000]: I0105 21:51:29.026305 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-859855f89d-t6p2g"] Jan 05 21:51:29 crc kubenswrapper[5000]: W0105 21:51:29.042011 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1aa85c76_2f7d_4716_bd4c_4f6f53b75d01.slice/crio-c4c889120b2b422b34e68b0150751b7bc772716dd1d3c772b25087db1aa0dde5 WatchSource:0}: Error finding container c4c889120b2b422b34e68b0150751b7bc772716dd1d3c772b25087db1aa0dde5: Status 404 returned error can't find the container with id c4c889120b2b422b34e68b0150751b7bc772716dd1d3c772b25087db1aa0dde5 Jan 05 21:51:29 crc kubenswrapper[5000]: I0105 21:51:29.334288 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6024769-eb72-4852-9278-e86730c00512" path="/var/lib/kubelet/pods/f6024769-eb72-4852-9278-e86730c00512/volumes" Jan 05 21:51:29 crc kubenswrapper[5000]: I0105 21:51:29.336662 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-859855f89d-t6p2g" event={"ID":"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01","Type":"ContainerStarted","Data":"754c9a7309be49c27029d950e04d41d3e9ee5371bc321bc2d7c2c8ad6dd30477"} Jan 05 21:51:29 crc kubenswrapper[5000]: I0105 21:51:29.336713 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-859855f89d-t6p2g" event={"ID":"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01","Type":"ContainerStarted","Data":"c4c889120b2b422b34e68b0150751b7bc772716dd1d3c772b25087db1aa0dde5"} Jan 05 21:51:29 crc kubenswrapper[5000]: I0105 21:51:29.339945 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c8579bfdd-r7vxj" 
event={"ID":"edc2dca8-56cc-43b6-b35d-18b84ff237d3","Type":"ContainerStarted","Data":"8767f21b43d27eda57f90c92284f4d510d176374d0863c93d885ba6af5d085e7"} Jan 05 21:51:29 crc kubenswrapper[5000]: I0105 21:51:29.339970 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c8579bfdd-r7vxj" event={"ID":"edc2dca8-56cc-43b6-b35d-18b84ff237d3","Type":"ContainerStarted","Data":"bf07e690bc2492eefd9e0b93029e25a96b4be251ce4d99b13f73ea52d7c5d10a"} Jan 05 21:51:29 crc kubenswrapper[5000]: I0105 21:51:29.339994 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:51:29 crc kubenswrapper[5000]: I0105 21:51:29.362210 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6c8579bfdd-r7vxj" podStartSLOduration=2.362183244 podStartE2EDuration="2.362183244s" podCreationTimestamp="2026-01-05 21:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:29.356119412 +0000 UTC m=+1044.312321891" watchObservedRunningTime="2026-01-05 21:51:29.362183244 +0000 UTC m=+1044.318385713" Jan 05 21:51:29 crc kubenswrapper[5000]: I0105 21:51:29.850313 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 21:51:29 crc kubenswrapper[5000]: I0105 21:51:29.850642 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 21:51:30 crc kubenswrapper[5000]: I0105 21:51:30.349305 5000 generic.go:334] "Generic (PLEG): container finished" podID="ce305106-1701-4e2e-b87a-fc358e9c99d2" containerID="8ec434cc706954296e05fd728532e26a869367de103707bea5a589d46cb52d25" exitCode=0 Jan 05 21:51:30 crc kubenswrapper[5000]: I0105 21:51:30.349370 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-65jnl" 
event={"ID":"ce305106-1701-4e2e-b87a-fc358e9c99d2","Type":"ContainerDied","Data":"8ec434cc706954296e05fd728532e26a869367de103707bea5a589d46cb52d25"} Jan 05 21:51:30 crc kubenswrapper[5000]: I0105 21:51:30.353932 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-859855f89d-t6p2g" event={"ID":"1aa85c76-2f7d-4716-bd4c-4f6f53b75d01","Type":"ContainerStarted","Data":"440118927b0a9cf933cf63cb2490209c7a6eb4a8fb40567fb0580c50115dc448"} Jan 05 21:51:30 crc kubenswrapper[5000]: I0105 21:51:30.354127 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:30 crc kubenswrapper[5000]: I0105 21:51:30.387962 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-859855f89d-t6p2g" podStartSLOduration=2.387945755 podStartE2EDuration="2.387945755s" podCreationTimestamp="2026-01-05 21:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:30.383436956 +0000 UTC m=+1045.339639425" watchObservedRunningTime="2026-01-05 21:51:30.387945755 +0000 UTC m=+1045.344148224" Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.366781 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.475146 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.475194 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.524842 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.546044 
5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.753145 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-65jnl" Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.892584 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce305106-1701-4e2e-b87a-fc358e9c99d2-db-sync-config-data\") pod \"ce305106-1701-4e2e-b87a-fc358e9c99d2\" (UID: \"ce305106-1701-4e2e-b87a-fc358e9c99d2\") " Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.892627 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzfqf\" (UniqueName: \"kubernetes.io/projected/ce305106-1701-4e2e-b87a-fc358e9c99d2-kube-api-access-qzfqf\") pod \"ce305106-1701-4e2e-b87a-fc358e9c99d2\" (UID: \"ce305106-1701-4e2e-b87a-fc358e9c99d2\") " Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.892739 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce305106-1701-4e2e-b87a-fc358e9c99d2-combined-ca-bundle\") pod \"ce305106-1701-4e2e-b87a-fc358e9c99d2\" (UID: \"ce305106-1701-4e2e-b87a-fc358e9c99d2\") " Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.895686 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce305106-1701-4e2e-b87a-fc358e9c99d2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ce305106-1701-4e2e-b87a-fc358e9c99d2" (UID: "ce305106-1701-4e2e-b87a-fc358e9c99d2"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.896703 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce305106-1701-4e2e-b87a-fc358e9c99d2-kube-api-access-qzfqf" (OuterVolumeSpecName: "kube-api-access-qzfqf") pod "ce305106-1701-4e2e-b87a-fc358e9c99d2" (UID: "ce305106-1701-4e2e-b87a-fc358e9c99d2"). InnerVolumeSpecName "kube-api-access-qzfqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.920772 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce305106-1701-4e2e-b87a-fc358e9c99d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce305106-1701-4e2e-b87a-fc358e9c99d2" (UID: "ce305106-1701-4e2e-b87a-fc358e9c99d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.994354 5000 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ce305106-1701-4e2e-b87a-fc358e9c99d2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.994379 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzfqf\" (UniqueName: \"kubernetes.io/projected/ce305106-1701-4e2e-b87a-fc358e9c99d2-kube-api-access-qzfqf\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:31 crc kubenswrapper[5000]: I0105 21:51:31.994391 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce305106-1701-4e2e-b87a-fc358e9c99d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.377653 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-65jnl" 
event={"ID":"ce305106-1701-4e2e-b87a-fc358e9c99d2","Type":"ContainerDied","Data":"0b82287c0d53e511fbaa98a38b3dc419424d8a02f94b567f69ec07df40cdc73a"} Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.377703 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b82287c0d53e511fbaa98a38b3dc419424d8a02f94b567f69ec07df40cdc73a" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.377668 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-65jnl" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.379262 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-prdrd" event={"ID":"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00","Type":"ContainerStarted","Data":"e9878eec8a7e6ee4dd26f7f70b81e1fa8913d0561e1def3e2f1a80e441fe135c"} Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.379687 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.379732 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.406875 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-prdrd" podStartSLOduration=2.834699901 podStartE2EDuration="46.406854708s" podCreationTimestamp="2026-01-05 21:50:46 +0000 UTC" firstStartedPulling="2026-01-05 21:50:48.215352882 +0000 UTC m=+1003.171555351" lastFinishedPulling="2026-01-05 21:51:31.787507689 +0000 UTC m=+1046.743710158" observedRunningTime="2026-01-05 21:51:32.395313349 +0000 UTC m=+1047.351515828" watchObservedRunningTime="2026-01-05 21:51:32.406854708 +0000 UTC m=+1047.363057177" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.622355 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-b6686bbd5-nnkl5"] 
Jan 05 21:51:32 crc kubenswrapper[5000]: E0105 21:51:32.622694 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce305106-1701-4e2e-b87a-fc358e9c99d2" containerName="barbican-db-sync" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.622706 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce305106-1701-4e2e-b87a-fc358e9c99d2" containerName="barbican-db-sync" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.622883 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce305106-1701-4e2e-b87a-fc358e9c99d2" containerName="barbican-db-sync" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.623729 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.653615 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.653831 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-j8sxz" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.654577 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.664178 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-b6686bbd5-nnkl5"] Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.711629 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4c4d270-9b90-47d9-b076-feac4ab48232-config-data-custom\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.711830 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbj7\" (UniqueName: \"kubernetes.io/projected/b4c4d270-9b90-47d9-b076-feac4ab48232-kube-api-access-5cbj7\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.711946 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c4d270-9b90-47d9-b076-feac4ab48232-logs\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.712051 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c4d270-9b90-47d9-b076-feac4ab48232-combined-ca-bundle\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.712160 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c4d270-9b90-47d9-b076-feac4ab48232-config-data\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.756408 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7b7c959586-6rv2n"] Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.773959 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.791237 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.792050 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b7c959586-6rv2n"] Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.807933 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-zf8jj"] Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.810206 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.814080 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c4d270-9b90-47d9-b076-feac4ab48232-logs\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.814150 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c4d270-9b90-47d9-b076-feac4ab48232-combined-ca-bundle\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.814233 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c4d270-9b90-47d9-b076-feac4ab48232-config-data\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 
21:51:32.814270 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0b4eb9-6ea0-470c-b684-35945245161c-logs\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.814321 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0b4eb9-6ea0-470c-b684-35945245161c-combined-ca-bundle\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.814354 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6zdx\" (UniqueName: \"kubernetes.io/projected/dc0b4eb9-6ea0-470c-b684-35945245161c-kube-api-access-m6zdx\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.814388 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc0b4eb9-6ea0-470c-b684-35945245161c-config-data-custom\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.814465 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0b4eb9-6ea0-470c-b684-35945245161c-config-data\") pod 
\"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.814495 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4c4d270-9b90-47d9-b076-feac4ab48232-config-data-custom\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.814521 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cbj7\" (UniqueName: \"kubernetes.io/projected/b4c4d270-9b90-47d9-b076-feac4ab48232-kube-api-access-5cbj7\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.815297 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c4d270-9b90-47d9-b076-feac4ab48232-logs\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.829454 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c4d270-9b90-47d9-b076-feac4ab48232-config-data\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.837183 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4c4d270-9b90-47d9-b076-feac4ab48232-config-data-custom\") pod 
\"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.839818 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c4d270-9b90-47d9-b076-feac4ab48232-combined-ca-bundle\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.851658 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-zf8jj"] Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.861814 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cbj7\" (UniqueName: \"kubernetes.io/projected/b4c4d270-9b90-47d9-b076-feac4ab48232-kube-api-access-5cbj7\") pod \"barbican-worker-b6686bbd5-nnkl5\" (UID: \"b4c4d270-9b90-47d9-b076-feac4ab48232\") " pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.915712 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0b4eb9-6ea0-470c-b684-35945245161c-logs\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.915763 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.915801 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0b4eb9-6ea0-470c-b684-35945245161c-combined-ca-bundle\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.915817 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5f6b\" (UniqueName: \"kubernetes.io/projected/dc114a40-a8fc-4199-bc0d-1044317b3e1e-kube-api-access-g5f6b\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.915840 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6zdx\" (UniqueName: \"kubernetes.io/projected/dc0b4eb9-6ea0-470c-b684-35945245161c-kube-api-access-m6zdx\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.915862 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-dns-svc\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.915890 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc0b4eb9-6ea0-470c-b684-35945245161c-config-data-custom\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " 
pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.915929 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-config\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.915958 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.915983 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0b4eb9-6ea0-470c-b684-35945245161c-config-data\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.916021 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.916415 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0b4eb9-6ea0-470c-b684-35945245161c-logs\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: 
\"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.926836 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0b4eb9-6ea0-470c-b684-35945245161c-config-data\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.938178 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc0b4eb9-6ea0-470c-b684-35945245161c-config-data-custom\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.938712 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0b4eb9-6ea0-470c-b684-35945245161c-combined-ca-bundle\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:32 crc kubenswrapper[5000]: I0105 21:51:32.975630 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6zdx\" (UniqueName: \"kubernetes.io/projected/dc0b4eb9-6ea0-470c-b684-35945245161c-kube-api-access-m6zdx\") pod \"barbican-keystone-listener-7b7c959586-6rv2n\" (UID: \"dc0b4eb9-6ea0-470c-b684-35945245161c\") " pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.041804 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.041865 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5f6b\" (UniqueName: \"kubernetes.io/projected/dc114a40-a8fc-4199-bc0d-1044317b3e1e-kube-api-access-g5f6b\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.041895 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-dns-svc\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.041953 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-config\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.041987 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.042046 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.043191 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.043492 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-dns-svc\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.043771 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-config\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.044229 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.045434 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-ovsdbserver-sb\") pod 
\"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.070558 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-b6686bbd5-nnkl5" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.108684 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58dd4b4f4d-j4qq5"] Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.111229 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.116274 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5f6b\" (UniqueName: \"kubernetes.io/projected/dc114a40-a8fc-4199-bc0d-1044317b3e1e-kube-api-access-g5f6b\") pod \"dnsmasq-dns-85ff748b95-zf8jj\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.116617 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.119659 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58dd4b4f4d-j4qq5"] Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.120249 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.144422 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-config-data-custom\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.144521 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7glp\" (UniqueName: \"kubernetes.io/projected/0599e537-6c53-4038-893f-4fb7f421c021-kube-api-access-s7glp\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.144592 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0599e537-6c53-4038-893f-4fb7f421c021-logs\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.144639 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-config-data\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.144706 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-combined-ca-bundle\") pod 
\"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.247199 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-config-data\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.247359 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-combined-ca-bundle\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.247455 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-config-data-custom\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.247615 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7glp\" (UniqueName: \"kubernetes.io/projected/0599e537-6c53-4038-893f-4fb7f421c021-kube-api-access-s7glp\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5" Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.247753 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0599e537-6c53-4038-893f-4fb7f421c021-logs\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: 
\"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5"
Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.249206 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0599e537-6c53-4038-893f-4fb7f421c021-logs\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5"
Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.256125 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-combined-ca-bundle\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5"
Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.257332 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-config-data\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5"
Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.261391 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-config-data-custom\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5"
Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.266454 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7glp\" (UniqueName: \"kubernetes.io/projected/0599e537-6c53-4038-893f-4fb7f421c021-kube-api-access-s7glp\") pod \"barbican-api-58dd4b4f4d-j4qq5\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " pod="openstack/barbican-api-58dd4b4f4d-j4qq5"
Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.339387 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-zf8jj"
Jan 05 21:51:33 crc kubenswrapper[5000]: I0105 21:51:33.446095 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58dd4b4f4d-j4qq5"
Jan 05 21:51:35 crc kubenswrapper[5000]: I0105 21:51:35.237167 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 05 21:51:35 crc kubenswrapper[5000]: I0105 21:51:35.237618 5000 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 05 21:51:35 crc kubenswrapper[5000]: I0105 21:51:35.240244 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 05 21:51:35 crc kubenswrapper[5000]: I0105 21:51:35.725820 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-65d5455f76-k75ww" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused"
Jan 05 21:51:35 crc kubenswrapper[5000]: I0105 21:51:35.866687 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59df95cbb-xkgb8"]
Jan 05 21:51:35 crc kubenswrapper[5000]: I0105 21:51:35.868683 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:35 crc kubenswrapper[5000]: I0105 21:51:35.872084 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 05 21:51:35 crc kubenswrapper[5000]: I0105 21:51:35.872321 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 05 21:51:35 crc kubenswrapper[5000]: I0105 21:51:35.885890 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59df95cbb-xkgb8"]
Jan 05 21:51:35 crc kubenswrapper[5000]: I0105 21:51:35.891563 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f48b4784d-5jgvr" podUID="ed51a505-1c96-4f98-879e-75283649a949" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.006620 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd1efe56-77b9-43ee-9c00-563a30e3d948-logs\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.006662 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-combined-ca-bundle\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.006706 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-internal-tls-certs\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.006973 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-config-data-custom\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.007070 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgv9\" (UniqueName: \"kubernetes.io/projected/bd1efe56-77b9-43ee-9c00-563a30e3d948-kube-api-access-dlgv9\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.007174 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-public-tls-certs\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.007214 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-config-data\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.108588 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-public-tls-certs\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.108627 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-config-data\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.108693 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd1efe56-77b9-43ee-9c00-563a30e3d948-logs\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.108713 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-combined-ca-bundle\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.108756 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-internal-tls-certs\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.108827 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-config-data-custom\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.108861 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlgv9\" (UniqueName: \"kubernetes.io/projected/bd1efe56-77b9-43ee-9c00-563a30e3d948-kube-api-access-dlgv9\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.109319 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd1efe56-77b9-43ee-9c00-563a30e3d948-logs\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.115302 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-config-data-custom\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.115510 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-public-tls-certs\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.116699 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-internal-tls-certs\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.117474 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-combined-ca-bundle\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.119970 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd1efe56-77b9-43ee-9c00-563a30e3d948-config-data\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.131255 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlgv9\" (UniqueName: \"kubernetes.io/projected/bd1efe56-77b9-43ee-9c00-563a30e3d948-kube-api-access-dlgv9\") pod \"barbican-api-59df95cbb-xkgb8\" (UID: \"bd1efe56-77b9-43ee-9c00-563a30e3d948\") " pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:36 crc kubenswrapper[5000]: I0105 21:51:36.217877 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:38 crc kubenswrapper[5000]: E0105 21:51:38.417969 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76"
Jan 05 21:51:38 crc kubenswrapper[5000]: I0105 21:51:38.437652 5000 generic.go:334] "Generic (PLEG): container finished" podID="4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00" containerID="e9878eec8a7e6ee4dd26f7f70b81e1fa8913d0561e1def3e2f1a80e441fe135c" exitCode=0
Jan 05 21:51:38 crc kubenswrapper[5000]: I0105 21:51:38.437732 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-prdrd" event={"ID":"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00","Type":"ContainerDied","Data":"e9878eec8a7e6ee4dd26f7f70b81e1fa8913d0561e1def3e2f1a80e441fe135c"}
Jan 05 21:51:38 crc kubenswrapper[5000]: I0105 21:51:38.445482 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77e33e26-6a57-4f48-9d16-3bb5502b1f76","Type":"ContainerStarted","Data":"8a08e6c6322151b56e75d3d98f89baddbc9e7db12cadfeee86ff569c7f8a2fd4"}
Jan 05 21:51:38 crc kubenswrapper[5000]: I0105 21:51:38.445705 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerName="ceilometer-notification-agent" containerID="cri-o://a8c8b079ed669aad435f15885d78ccf11ab351d19b2b62085a1b352870fb5d13" gracePeriod=30
Jan 05 21:51:38 crc kubenswrapper[5000]: I0105 21:51:38.445835 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 05 21:51:38 crc kubenswrapper[5000]: I0105 21:51:38.445928 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerName="proxy-httpd" containerID="cri-o://8a08e6c6322151b56e75d3d98f89baddbc9e7db12cadfeee86ff569c7f8a2fd4" gracePeriod=30
Jan 05 21:51:38 crc kubenswrapper[5000]: I0105 21:51:38.445988 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerName="sg-core" containerID="cri-o://28226b1d0d41ad62538f1f8c07ede3fe51fe633fad414b174aa289b8ed538265" gracePeriod=30
Jan 05 21:51:38 crc kubenswrapper[5000]: I0105 21:51:38.533555 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b7c959586-6rv2n"]
Jan 05 21:51:38 crc kubenswrapper[5000]: I0105 21:51:38.749335 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59df95cbb-xkgb8"]
Jan 05 21:51:38 crc kubenswrapper[5000]: I0105 21:51:38.830773 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58dd4b4f4d-j4qq5"]
Jan 05 21:51:38 crc kubenswrapper[5000]: I0105 21:51:38.862438 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-zf8jj"]
Jan 05 21:51:38 crc kubenswrapper[5000]: W0105 21:51:38.925123 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc114a40_a8fc_4199_bc0d_1044317b3e1e.slice/crio-997a35f3d168db1cfa1784ff0fd2a96a6992e786b65a41c4cdbdbc24ae768c67 WatchSource:0}: Error finding container 997a35f3d168db1cfa1784ff0fd2a96a6992e786b65a41c4cdbdbc24ae768c67: Status 404 returned error can't find the container with id 997a35f3d168db1cfa1784ff0fd2a96a6992e786b65a41c4cdbdbc24ae768c67
Jan 05 21:51:38 crc kubenswrapper[5000]: I0105 21:51:38.938785 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-b6686bbd5-nnkl5"]
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.462421 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59df95cbb-xkgb8" event={"ID":"bd1efe56-77b9-43ee-9c00-563a30e3d948","Type":"ContainerStarted","Data":"4dd8550285e00be0277154562817527339daa3ea0041b9bef4ba6efaccae9ae8"}
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.462464 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59df95cbb-xkgb8" event={"ID":"bd1efe56-77b9-43ee-9c00-563a30e3d948","Type":"ContainerStarted","Data":"02895fffb3f0dcef616bf4e1c5516f659879caa861c3fdf0a24981f50167e73f"}
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.462474 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59df95cbb-xkgb8" event={"ID":"bd1efe56-77b9-43ee-9c00-563a30e3d948","Type":"ContainerStarted","Data":"a4019d379bc1b2faa584ef457449956bad0f7896824eebf63d362128b7e07e1d"}
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.463798 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.463814 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59df95cbb-xkgb8"
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.464925 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b6686bbd5-nnkl5" event={"ID":"b4c4d270-9b90-47d9-b076-feac4ab48232","Type":"ContainerStarted","Data":"40aadc59becb7c5f8cc2567ee8b195e18cc40ec161dcd92567b106b0aff43944"}
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.486863 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59df95cbb-xkgb8" podStartSLOduration=4.486839427 podStartE2EDuration="4.486839427s" podCreationTimestamp="2026-01-05 21:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:39.47819167 +0000 UTC m=+1054.434394139" watchObservedRunningTime="2026-01-05 21:51:39.486839427 +0000 UTC m=+1054.443041896"
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.488326 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" event={"ID":"dc0b4eb9-6ea0-470c-b684-35945245161c","Type":"ContainerStarted","Data":"005f542087b8b3e9f2bfa9ce480b96f604c97637b49c641ce1d525d6e976852a"}
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.490949 5000 generic.go:334] "Generic (PLEG): container finished" podID="dc114a40-a8fc-4199-bc0d-1044317b3e1e" containerID="21573a7a1cfbd8007b4b0db1f39711da5a17c12286ebf4cd5c61a69cc63cb2ff" exitCode=0
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.491011 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" event={"ID":"dc114a40-a8fc-4199-bc0d-1044317b3e1e","Type":"ContainerDied","Data":"21573a7a1cfbd8007b4b0db1f39711da5a17c12286ebf4cd5c61a69cc63cb2ff"}
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.491035 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" event={"ID":"dc114a40-a8fc-4199-bc0d-1044317b3e1e","Type":"ContainerStarted","Data":"997a35f3d168db1cfa1784ff0fd2a96a6992e786b65a41c4cdbdbc24ae768c67"}
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.502630 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" event={"ID":"0599e537-6c53-4038-893f-4fb7f421c021","Type":"ContainerStarted","Data":"9e30eaff39806a98da7f5f3fdfea13218e33e230058250dafcf16b23032a5d2e"}
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.502669 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" event={"ID":"0599e537-6c53-4038-893f-4fb7f421c021","Type":"ContainerStarted","Data":"01b24c8f2fc9f0b73154932c01d3b2db6a10d1512a47f52947cf3e687af42a2c"}
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.502679 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" event={"ID":"0599e537-6c53-4038-893f-4fb7f421c021","Type":"ContainerStarted","Data":"0c9a595feea571ed294b034f49133d8c1fab86cfa7f46748f9c337337ddb2c84"}
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.503605 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58dd4b4f4d-j4qq5"
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.503646 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58dd4b4f4d-j4qq5"
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.506241 5000 generic.go:334] "Generic (PLEG): container finished" podID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerID="28226b1d0d41ad62538f1f8c07ede3fe51fe633fad414b174aa289b8ed538265" exitCode=2
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.506415 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77e33e26-6a57-4f48-9d16-3bb5502b1f76","Type":"ContainerDied","Data":"28226b1d0d41ad62538f1f8c07ede3fe51fe633fad414b174aa289b8ed538265"}
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.577618 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" podStartSLOduration=6.577599803 podStartE2EDuration="6.577599803s" podCreationTimestamp="2026-01-05 21:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:39.534378422 +0000 UTC m=+1054.490580891" watchObservedRunningTime="2026-01-05 21:51:39.577599803 +0000 UTC m=+1054.533802272"
Jan 05 21:51:39 crc kubenswrapper[5000]: I0105 21:51:39.935229 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-prdrd"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.014443 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-scripts\") pod \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") "
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.014494 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-db-sync-config-data\") pod \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") "
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.014512 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-config-data\") pod \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") "
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.014541 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7zdp\" (UniqueName: \"kubernetes.io/projected/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-kube-api-access-s7zdp\") pod \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") "
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.014560 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-combined-ca-bundle\") pod \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") "
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.014646 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-etc-machine-id\") pod \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\" (UID: \"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00\") "
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.015037 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00" (UID: "4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.019871 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-scripts" (OuterVolumeSpecName: "scripts") pod "4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00" (UID: "4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.019979 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00" (UID: "4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.021041 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-kube-api-access-s7zdp" (OuterVolumeSpecName: "kube-api-access-s7zdp") pod "4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00" (UID: "4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00"). InnerVolumeSpecName "kube-api-access-s7zdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.044693 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00" (UID: "4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.062935 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-config-data" (OuterVolumeSpecName: "config-data") pod "4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00" (UID: "4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.115927 5000 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.115958 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-config-data\") on node \"crc\" DevicePath \"\""
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.115970 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7zdp\" (UniqueName: \"kubernetes.io/projected/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-kube-api-access-s7zdp\") on node \"crc\" DevicePath \"\""
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.115984 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.115995 5000 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.116006 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00-scripts\") on node \"crc\" DevicePath \"\""
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.536251 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-prdrd" event={"ID":"4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00","Type":"ContainerDied","Data":"f6eb4e841a82ea82a0a4a3ba527188e26a6331808414a47525b06f58b3a192d2"}
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.536492 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6eb4e841a82ea82a0a4a3ba527188e26a6331808414a47525b06f58b3a192d2"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.536553 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-prdrd"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.554563 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" event={"ID":"dc114a40-a8fc-4199-bc0d-1044317b3e1e","Type":"ContainerStarted","Data":"6b7dcfefc8c23100e8d5407c5146fff050075e92110903e911fe345c79fd24e2"}
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.555803 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-zf8jj"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.576520 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" podStartSLOduration=8.576503209 podStartE2EDuration="8.576503209s" podCreationTimestamp="2026-01-05 21:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:40.574201464 +0000 UTC m=+1055.530403953" watchObservedRunningTime="2026-01-05 21:51:40.576503209 +0000 UTC m=+1055.532705678"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.720146 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 05 21:51:40 crc kubenswrapper[5000]: E0105 21:51:40.720740 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00" containerName="cinder-db-sync"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.720849 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00" containerName="cinder-db-sync"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.721178 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00" containerName="cinder-db-sync"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.722149 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.735411 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.735647 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.735588 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.735885 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j92rx"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.738412 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.826048 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-zf8jj"]
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.838531 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.838636 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w7vh\" (UniqueName: \"kubernetes.io/projected/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-kube-api-access-6w7vh\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.838667 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.838694 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.838717 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.838751 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.876633 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-h8ftr"]
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.889498 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.902095 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-h8ftr"]
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.943410 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cklsh\" (UniqueName: \"kubernetes.io/projected/2d8e5c82-8a89-4455-9c90-f69a8442822d-kube-api-access-cklsh\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.943503 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.943563 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.943586 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-config\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.943636 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.943658 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w7vh\" (UniqueName: \"kubernetes.io/projected/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-kube-api-access-6w7vh\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.943690 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.943712 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.943733 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.943756 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.943772 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.943809 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.943913 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.949779 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0"
Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.957346 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " 
pod="openstack/cinder-scheduler-0" Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.965447 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.973085 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:40 crc kubenswrapper[5000]: I0105 21:51:40.992583 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w7vh\" (UniqueName: \"kubernetes.io/projected/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-kube-api-access-6w7vh\") pod \"cinder-scheduler-0\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.042236 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.050590 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cklsh\" (UniqueName: \"kubernetes.io/projected/2d8e5c82-8a89-4455-9c90-f69a8442822d-kube-api-access-cklsh\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.050688 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: 
\"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.050733 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-config\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.050801 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.050838 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.050879 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.052816 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.052944 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.053017 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.054565 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-config\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.054784 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.055923 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.059357 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.074226 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-api-0"] Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.075466 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.082084 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cklsh\" (UniqueName: \"kubernetes.io/projected/2d8e5c82-8a89-4455-9c90-f69a8442822d-kube-api-access-cklsh\") pod \"dnsmasq-dns-5c9776ccc5-h8ftr\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.152913 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-config-data\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.153294 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-config-data-custom\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.153422 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1115d57d-cd28-4eea-b141-dcc46363b41f-logs\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.153455 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1115d57d-cd28-4eea-b141-dcc46363b41f-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.153479 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.153507 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-scripts\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.153566 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gsqc\" (UniqueName: \"kubernetes.io/projected/1115d57d-cd28-4eea-b141-dcc46363b41f-kube-api-access-5gsqc\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.255946 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.259026 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gsqc\" (UniqueName: \"kubernetes.io/projected/1115d57d-cd28-4eea-b141-dcc46363b41f-kube-api-access-5gsqc\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.259186 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-config-data\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.259272 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-config-data-custom\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.259389 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1115d57d-cd28-4eea-b141-dcc46363b41f-logs\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.259427 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1115d57d-cd28-4eea-b141-dcc46363b41f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.259473 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.259623 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-scripts\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.260383 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1115d57d-cd28-4eea-b141-dcc46363b41f-logs\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.262783 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1115d57d-cd28-4eea-b141-dcc46363b41f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.272567 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-config-data-custom\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.272565 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-config-data\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.273631 5000 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.275375 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-scripts\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.285576 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gsqc\" (UniqueName: \"kubernetes.io/projected/1115d57d-cd28-4eea-b141-dcc46363b41f-kube-api-access-5gsqc\") pod \"cinder-api-0\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.382427 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.563447 5000 generic.go:334] "Generic (PLEG): container finished" podID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerID="a8c8b079ed669aad435f15885d78ccf11ab351d19b2b62085a1b352870fb5d13" exitCode=0 Jan 05 21:51:41 crc kubenswrapper[5000]: I0105 21:51:41.563529 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77e33e26-6a57-4f48-9d16-3bb5502b1f76","Type":"ContainerDied","Data":"a8c8b079ed669aad435f15885d78ccf11ab351d19b2b62085a1b352870fb5d13"} Jan 05 21:51:42 crc kubenswrapper[5000]: W0105 21:51:42.122224 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb5bc15d_ce48_48fe_9c4e_3d18dbeabe9f.slice/crio-ced7dc86562a1a137054cdfc2daf639b48718d0589ace9882508e146500e76b8 WatchSource:0}: Error finding container ced7dc86562a1a137054cdfc2daf639b48718d0589ace9882508e146500e76b8: Status 404 returned error can't find the container with id ced7dc86562a1a137054cdfc2daf639b48718d0589ace9882508e146500e76b8 Jan 05 21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.140566 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.356760 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 21:51:42 crc kubenswrapper[5000]: W0105 21:51:42.359603 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1115d57d_cd28_4eea_b141_dcc46363b41f.slice/crio-4ce3e743cd95c15e3b4f1c97deda30a069d1ef10915455346eb8e3282f94a95b WatchSource:0}: Error finding container 4ce3e743cd95c15e3b4f1c97deda30a069d1ef10915455346eb8e3282f94a95b: Status 404 returned error can't find the container with id 4ce3e743cd95c15e3b4f1c97deda30a069d1ef10915455346eb8e3282f94a95b Jan 05 
21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.461819 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-h8ftr"] Jan 05 21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.576402 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" event={"ID":"dc0b4eb9-6ea0-470c-b684-35945245161c","Type":"ContainerStarted","Data":"993f9d49b8a40663bfa218a0902ef1dfaa525412f150eb0ff44b567b2f100a4e"} Jan 05 21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.576460 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" event={"ID":"dc0b4eb9-6ea0-470c-b684-35945245161c","Type":"ContainerStarted","Data":"a11d6749037a6d0e0bc1dd87e4a16771114c314a15136590324d174ee1d23e07"} Jan 05 21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.578409 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" event={"ID":"2d8e5c82-8a89-4455-9c90-f69a8442822d","Type":"ContainerStarted","Data":"f43e008cf5ab9b82f3433f0a9ec3b50c2bfb7a049b659b810700bdbaf9aabecd"} Jan 05 21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.579534 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f","Type":"ContainerStarted","Data":"ced7dc86562a1a137054cdfc2daf639b48718d0589ace9882508e146500e76b8"} Jan 05 21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.582337 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1115d57d-cd28-4eea-b141-dcc46363b41f","Type":"ContainerStarted","Data":"4ce3e743cd95c15e3b4f1c97deda30a069d1ef10915455346eb8e3282f94a95b"} Jan 05 21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.589501 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b6686bbd5-nnkl5" 
event={"ID":"b4c4d270-9b90-47d9-b076-feac4ab48232","Type":"ContainerStarted","Data":"c4b491efcbd39916517f172fb7126b877f2496cf07d73220a5018b916f6f5ab0"} Jan 05 21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.589650 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" podUID="dc114a40-a8fc-4199-bc0d-1044317b3e1e" containerName="dnsmasq-dns" containerID="cri-o://6b7dcfefc8c23100e8d5407c5146fff050075e92110903e911fe345c79fd24e2" gracePeriod=10 Jan 05 21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.589669 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b6686bbd5-nnkl5" event={"ID":"b4c4d270-9b90-47d9-b076-feac4ab48232","Type":"ContainerStarted","Data":"c5dc762a21505147f090d3d260a60e312d112ad365dbf3155de192cd119d3245"} Jan 05 21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.601020 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7b7c959586-6rv2n" podStartSLOduration=7.471193281 podStartE2EDuration="10.601002391s" podCreationTimestamp="2026-01-05 21:51:32 +0000 UTC" firstStartedPulling="2026-01-05 21:51:38.534149998 +0000 UTC m=+1053.490352477" lastFinishedPulling="2026-01-05 21:51:41.663959118 +0000 UTC m=+1056.620161587" observedRunningTime="2026-01-05 21:51:42.592720545 +0000 UTC m=+1057.548923014" watchObservedRunningTime="2026-01-05 21:51:42.601002391 +0000 UTC m=+1057.557204850" Jan 05 21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.618734 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-b6686bbd5-nnkl5" podStartSLOduration=7.91540762 podStartE2EDuration="10.618716236s" podCreationTimestamp="2026-01-05 21:51:32 +0000 UTC" firstStartedPulling="2026-01-05 21:51:38.960083426 +0000 UTC m=+1053.916285895" lastFinishedPulling="2026-01-05 21:51:41.663392052 +0000 UTC m=+1056.619594511" observedRunningTime="2026-01-05 21:51:42.616110732 +0000 UTC 
m=+1057.572313201" watchObservedRunningTime="2026-01-05 21:51:42.618716236 +0000 UTC m=+1057.574918695" Jan 05 21:51:42 crc kubenswrapper[5000]: I0105 21:51:42.747975 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.199016 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.217517 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5f6b\" (UniqueName: \"kubernetes.io/projected/dc114a40-a8fc-4199-bc0d-1044317b3e1e-kube-api-access-g5f6b\") pod \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.217599 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-ovsdbserver-nb\") pod \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.217643 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-dns-swift-storage-0\") pod \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.217726 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-ovsdbserver-sb\") pod \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.217773 5000 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-config\") pod \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.217861 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-dns-svc\") pod \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\" (UID: \"dc114a40-a8fc-4199-bc0d-1044317b3e1e\") " Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.222500 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc114a40-a8fc-4199-bc0d-1044317b3e1e-kube-api-access-g5f6b" (OuterVolumeSpecName: "kube-api-access-g5f6b") pod "dc114a40-a8fc-4199-bc0d-1044317b3e1e" (UID: "dc114a40-a8fc-4199-bc0d-1044317b3e1e"). InnerVolumeSpecName "kube-api-access-g5f6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.305734 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dc114a40-a8fc-4199-bc0d-1044317b3e1e" (UID: "dc114a40-a8fc-4199-bc0d-1044317b3e1e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.306482 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dc114a40-a8fc-4199-bc0d-1044317b3e1e" (UID: "dc114a40-a8fc-4199-bc0d-1044317b3e1e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.316384 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-config" (OuterVolumeSpecName: "config") pod "dc114a40-a8fc-4199-bc0d-1044317b3e1e" (UID: "dc114a40-a8fc-4199-bc0d-1044317b3e1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.320448 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5f6b\" (UniqueName: \"kubernetes.io/projected/dc114a40-a8fc-4199-bc0d-1044317b3e1e-kube-api-access-g5f6b\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.320966 5000 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.321046 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.321135 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.322489 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dc114a40-a8fc-4199-bc0d-1044317b3e1e" (UID: "dc114a40-a8fc-4199-bc0d-1044317b3e1e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.330105 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc114a40-a8fc-4199-bc0d-1044317b3e1e" (UID: "dc114a40-a8fc-4199-bc0d-1044317b3e1e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.422282 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.422318 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc114a40-a8fc-4199-bc0d-1044317b3e1e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.604234 5000 generic.go:334] "Generic (PLEG): container finished" podID="2d8e5c82-8a89-4455-9c90-f69a8442822d" containerID="7dc5ff7af7e132dd185bb34b05348536057cbb17dde36cb085b1b814f4cfa93a" exitCode=0 Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.604321 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" event={"ID":"2d8e5c82-8a89-4455-9c90-f69a8442822d","Type":"ContainerDied","Data":"7dc5ff7af7e132dd185bb34b05348536057cbb17dde36cb085b1b814f4cfa93a"} Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.621338 5000 generic.go:334] "Generic (PLEG): container finished" podID="dc114a40-a8fc-4199-bc0d-1044317b3e1e" containerID="6b7dcfefc8c23100e8d5407c5146fff050075e92110903e911fe345c79fd24e2" exitCode=0 Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.621442 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" 
event={"ID":"dc114a40-a8fc-4199-bc0d-1044317b3e1e","Type":"ContainerDied","Data":"6b7dcfefc8c23100e8d5407c5146fff050075e92110903e911fe345c79fd24e2"} Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.621472 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" event={"ID":"dc114a40-a8fc-4199-bc0d-1044317b3e1e","Type":"ContainerDied","Data":"997a35f3d168db1cfa1784ff0fd2a96a6992e786b65a41c4cdbdbc24ae768c67"} Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.621492 5000 scope.go:117] "RemoveContainer" containerID="6b7dcfefc8c23100e8d5407c5146fff050075e92110903e911fe345c79fd24e2" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.621645 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-zf8jj" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.634682 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1115d57d-cd28-4eea-b141-dcc46363b41f","Type":"ContainerStarted","Data":"e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33"} Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.654669 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-zf8jj"] Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.661593 5000 scope.go:117] "RemoveContainer" containerID="21573a7a1cfbd8007b4b0db1f39711da5a17c12286ebf4cd5c61a69cc63cb2ff" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.670638 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-zf8jj"] Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.707015 5000 scope.go:117] "RemoveContainer" containerID="6b7dcfefc8c23100e8d5407c5146fff050075e92110903e911fe345c79fd24e2" Jan 05 21:51:43 crc kubenswrapper[5000]: E0105 21:51:43.707571 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"6b7dcfefc8c23100e8d5407c5146fff050075e92110903e911fe345c79fd24e2\": container with ID starting with 6b7dcfefc8c23100e8d5407c5146fff050075e92110903e911fe345c79fd24e2 not found: ID does not exist" containerID="6b7dcfefc8c23100e8d5407c5146fff050075e92110903e911fe345c79fd24e2" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.707621 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b7dcfefc8c23100e8d5407c5146fff050075e92110903e911fe345c79fd24e2"} err="failed to get container status \"6b7dcfefc8c23100e8d5407c5146fff050075e92110903e911fe345c79fd24e2\": rpc error: code = NotFound desc = could not find container \"6b7dcfefc8c23100e8d5407c5146fff050075e92110903e911fe345c79fd24e2\": container with ID starting with 6b7dcfefc8c23100e8d5407c5146fff050075e92110903e911fe345c79fd24e2 not found: ID does not exist" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.707657 5000 scope.go:117] "RemoveContainer" containerID="21573a7a1cfbd8007b4b0db1f39711da5a17c12286ebf4cd5c61a69cc63cb2ff" Jan 05 21:51:43 crc kubenswrapper[5000]: E0105 21:51:43.708079 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21573a7a1cfbd8007b4b0db1f39711da5a17c12286ebf4cd5c61a69cc63cb2ff\": container with ID starting with 21573a7a1cfbd8007b4b0db1f39711da5a17c12286ebf4cd5c61a69cc63cb2ff not found: ID does not exist" containerID="21573a7a1cfbd8007b4b0db1f39711da5a17c12286ebf4cd5c61a69cc63cb2ff" Jan 05 21:51:43 crc kubenswrapper[5000]: I0105 21:51:43.708110 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21573a7a1cfbd8007b4b0db1f39711da5a17c12286ebf4cd5c61a69cc63cb2ff"} err="failed to get container status \"21573a7a1cfbd8007b4b0db1f39711da5a17c12286ebf4cd5c61a69cc63cb2ff\": rpc error: code = NotFound desc = could not find container \"21573a7a1cfbd8007b4b0db1f39711da5a17c12286ebf4cd5c61a69cc63cb2ff\": container 
with ID starting with 21573a7a1cfbd8007b4b0db1f39711da5a17c12286ebf4cd5c61a69cc63cb2ff not found: ID does not exist" Jan 05 21:51:44 crc kubenswrapper[5000]: I0105 21:51:44.530495 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:44 crc kubenswrapper[5000]: I0105 21:51:44.645985 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f","Type":"ContainerStarted","Data":"9141b2a78572db48a059dab786e46d330778ddb5fcddf29a8e24fc46d72174a0"} Jan 05 21:51:44 crc kubenswrapper[5000]: I0105 21:51:44.646070 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f","Type":"ContainerStarted","Data":"db7afb4a88b62dbd1b089ffc5e649c1b010b56b891c3ed7878c49b2d0f9fe60a"} Jan 05 21:51:44 crc kubenswrapper[5000]: I0105 21:51:44.649033 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1115d57d-cd28-4eea-b141-dcc46363b41f" containerName="cinder-api-log" containerID="cri-o://e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33" gracePeriod=30 Jan 05 21:51:44 crc kubenswrapper[5000]: I0105 21:51:44.649445 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1115d57d-cd28-4eea-b141-dcc46363b41f","Type":"ContainerStarted","Data":"b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9"} Jan 05 21:51:44 crc kubenswrapper[5000]: I0105 21:51:44.649581 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 05 21:51:44 crc kubenswrapper[5000]: I0105 21:51:44.649680 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1115d57d-cd28-4eea-b141-dcc46363b41f" containerName="cinder-api" 
containerID="cri-o://b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9" gracePeriod=30 Jan 05 21:51:44 crc kubenswrapper[5000]: I0105 21:51:44.655816 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" event={"ID":"2d8e5c82-8a89-4455-9c90-f69a8442822d","Type":"ContainerStarted","Data":"356fe08f62620158b7489eb2ff279c54719bb52b2d93daab2de4106c7342bf92"} Jan 05 21:51:44 crc kubenswrapper[5000]: I0105 21:51:44.656240 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:44 crc kubenswrapper[5000]: I0105 21:51:44.667227 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.77903587 podStartE2EDuration="4.667206781s" podCreationTimestamp="2026-01-05 21:51:40 +0000 UTC" firstStartedPulling="2026-01-05 21:51:42.131524522 +0000 UTC m=+1057.087726991" lastFinishedPulling="2026-01-05 21:51:43.019695423 +0000 UTC m=+1057.975897902" observedRunningTime="2026-01-05 21:51:44.666478861 +0000 UTC m=+1059.622681330" watchObservedRunningTime="2026-01-05 21:51:44.667206781 +0000 UTC m=+1059.623409250" Jan 05 21:51:44 crc kubenswrapper[5000]: I0105 21:51:44.693793 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.693777989 podStartE2EDuration="4.693777989s" podCreationTimestamp="2026-01-05 21:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:44.693682616 +0000 UTC m=+1059.649885085" watchObservedRunningTime="2026-01-05 21:51:44.693777989 +0000 UTC m=+1059.649980458" Jan 05 21:51:44 crc kubenswrapper[5000]: I0105 21:51:44.733374 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" podStartSLOduration=4.733356786 
podStartE2EDuration="4.733356786s" podCreationTimestamp="2026-01-05 21:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:44.731403061 +0000 UTC m=+1059.687605530" watchObservedRunningTime="2026-01-05 21:51:44.733356786 +0000 UTC m=+1059.689559255" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.254348 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.335446 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc114a40-a8fc-4199-bc0d-1044317b3e1e" path="/var/lib/kubelet/pods/dc114a40-a8fc-4199-bc0d-1044317b3e1e/volumes" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.364807 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-scripts\") pod \"1115d57d-cd28-4eea-b141-dcc46363b41f\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.366111 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gsqc\" (UniqueName: \"kubernetes.io/projected/1115d57d-cd28-4eea-b141-dcc46363b41f-kube-api-access-5gsqc\") pod \"1115d57d-cd28-4eea-b141-dcc46363b41f\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.366197 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-config-data-custom\") pod \"1115d57d-cd28-4eea-b141-dcc46363b41f\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.366230 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1115d57d-cd28-4eea-b141-dcc46363b41f-etc-machine-id\") pod \"1115d57d-cd28-4eea-b141-dcc46363b41f\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.366259 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1115d57d-cd28-4eea-b141-dcc46363b41f-logs\") pod \"1115d57d-cd28-4eea-b141-dcc46363b41f\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.366347 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-config-data\") pod \"1115d57d-cd28-4eea-b141-dcc46363b41f\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.366373 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-combined-ca-bundle\") pod \"1115d57d-cd28-4eea-b141-dcc46363b41f\" (UID: \"1115d57d-cd28-4eea-b141-dcc46363b41f\") " Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.366366 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1115d57d-cd28-4eea-b141-dcc46363b41f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1115d57d-cd28-4eea-b141-dcc46363b41f" (UID: "1115d57d-cd28-4eea-b141-dcc46363b41f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.366803 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1115d57d-cd28-4eea-b141-dcc46363b41f-logs" (OuterVolumeSpecName: "logs") pod "1115d57d-cd28-4eea-b141-dcc46363b41f" (UID: "1115d57d-cd28-4eea-b141-dcc46363b41f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.367150 5000 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1115d57d-cd28-4eea-b141-dcc46363b41f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.367172 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1115d57d-cd28-4eea-b141-dcc46363b41f-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.372286 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1115d57d-cd28-4eea-b141-dcc46363b41f-kube-api-access-5gsqc" (OuterVolumeSpecName: "kube-api-access-5gsqc") pod "1115d57d-cd28-4eea-b141-dcc46363b41f" (UID: "1115d57d-cd28-4eea-b141-dcc46363b41f"). InnerVolumeSpecName "kube-api-access-5gsqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.375008 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-scripts" (OuterVolumeSpecName: "scripts") pod "1115d57d-cd28-4eea-b141-dcc46363b41f" (UID: "1115d57d-cd28-4eea-b141-dcc46363b41f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.386116 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1115d57d-cd28-4eea-b141-dcc46363b41f" (UID: "1115d57d-cd28-4eea-b141-dcc46363b41f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.401037 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1115d57d-cd28-4eea-b141-dcc46363b41f" (UID: "1115d57d-cd28-4eea-b141-dcc46363b41f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.429221 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-config-data" (OuterVolumeSpecName: "config-data") pod "1115d57d-cd28-4eea-b141-dcc46363b41f" (UID: "1115d57d-cd28-4eea-b141-dcc46363b41f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.469262 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.469315 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.469331 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.469342 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gsqc\" (UniqueName: \"kubernetes.io/projected/1115d57d-cd28-4eea-b141-dcc46363b41f-kube-api-access-5gsqc\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.469355 5000 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1115d57d-cd28-4eea-b141-dcc46363b41f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.667339 5000 generic.go:334] "Generic (PLEG): container finished" podID="1115d57d-cd28-4eea-b141-dcc46363b41f" containerID="b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9" exitCode=0 Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.667377 5000 generic.go:334] "Generic (PLEG): container finished" podID="1115d57d-cd28-4eea-b141-dcc46363b41f" containerID="e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33" exitCode=143 Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.668353 5000 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.674934 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1115d57d-cd28-4eea-b141-dcc46363b41f","Type":"ContainerDied","Data":"b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9"} Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.674988 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1115d57d-cd28-4eea-b141-dcc46363b41f","Type":"ContainerDied","Data":"e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33"} Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.675004 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1115d57d-cd28-4eea-b141-dcc46363b41f","Type":"ContainerDied","Data":"4ce3e743cd95c15e3b4f1c97deda30a069d1ef10915455346eb8e3282f94a95b"} Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.675023 5000 scope.go:117] "RemoveContainer" containerID="b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.703204 5000 scope.go:117] "RemoveContainer" containerID="e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.717649 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.723575 5000 scope.go:117] "RemoveContainer" containerID="b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9" Jan 05 21:51:45 crc kubenswrapper[5000]: E0105 21:51:45.723988 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9\": container with ID starting with 
b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9 not found: ID does not exist" containerID="b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.724032 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9"} err="failed to get container status \"b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9\": rpc error: code = NotFound desc = could not find container \"b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9\": container with ID starting with b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9 not found: ID does not exist" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.724056 5000 scope.go:117] "RemoveContainer" containerID="e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33" Jan 05 21:51:45 crc kubenswrapper[5000]: E0105 21:51:45.724842 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33\": container with ID starting with e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33 not found: ID does not exist" containerID="e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.724865 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33"} err="failed to get container status \"e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33\": rpc error: code = NotFound desc = could not find container \"e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33\": container with ID starting with e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33 not found: ID does not 
exist" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.724879 5000 scope.go:117] "RemoveContainer" containerID="b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.727881 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9"} err="failed to get container status \"b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9\": rpc error: code = NotFound desc = could not find container \"b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9\": container with ID starting with b9c1a486079c68cd1286c5730e7ba7622279b51abf1fe3d58531a8ec106d9db9 not found: ID does not exist" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.727935 5000 scope.go:117] "RemoveContainer" containerID="e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.728311 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33"} err="failed to get container status \"e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33\": rpc error: code = NotFound desc = could not find container \"e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33\": container with ID starting with e659eb552d3d9e08dcb9742ef55f9619b5df7dc62ee30b95d6eca133d4bfaa33 not found: ID does not exist" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.736725 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.747725 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 05 21:51:45 crc kubenswrapper[5000]: E0105 21:51:45.748154 5000 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dc114a40-a8fc-4199-bc0d-1044317b3e1e" containerName="dnsmasq-dns" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.748175 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc114a40-a8fc-4199-bc0d-1044317b3e1e" containerName="dnsmasq-dns" Jan 05 21:51:45 crc kubenswrapper[5000]: E0105 21:51:45.748198 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc114a40-a8fc-4199-bc0d-1044317b3e1e" containerName="init" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.748204 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc114a40-a8fc-4199-bc0d-1044317b3e1e" containerName="init" Jan 05 21:51:45 crc kubenswrapper[5000]: E0105 21:51:45.748216 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1115d57d-cd28-4eea-b141-dcc46363b41f" containerName="cinder-api" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.748222 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="1115d57d-cd28-4eea-b141-dcc46363b41f" containerName="cinder-api" Jan 05 21:51:45 crc kubenswrapper[5000]: E0105 21:51:45.748240 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1115d57d-cd28-4eea-b141-dcc46363b41f" containerName="cinder-api-log" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.748246 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="1115d57d-cd28-4eea-b141-dcc46363b41f" containerName="cinder-api-log" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.748397 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc114a40-a8fc-4199-bc0d-1044317b3e1e" containerName="dnsmasq-dns" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.748414 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="1115d57d-cd28-4eea-b141-dcc46363b41f" containerName="cinder-api" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.748429 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="1115d57d-cd28-4eea-b141-dcc46363b41f" 
containerName="cinder-api-log" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.749425 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.750859 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.752369 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.752517 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.783982 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.879289 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.879350 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-config-data\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.879381 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-scripts\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: 
I0105 21:51:45.879404 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.879461 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3278f23c-9157-4155-b406-e1ff0591348e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.879591 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-config-data-custom\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.879741 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3278f23c-9157-4155-b406-e1ff0591348e-logs\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.879805 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49stw\" (UniqueName: \"kubernetes.io/projected/3278f23c-9157-4155-b406-e1ff0591348e-kube-api-access-49stw\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.880063 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.981852 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.981968 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.982028 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-config-data\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.982066 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-scripts\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.982095 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 
05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.982180 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3278f23c-9157-4155-b406-e1ff0591348e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.982218 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-config-data-custom\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.982259 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3278f23c-9157-4155-b406-e1ff0591348e-logs\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.982284 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49stw\" (UniqueName: \"kubernetes.io/projected/3278f23c-9157-4155-b406-e1ff0591348e-kube-api-access-49stw\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.982975 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3278f23c-9157-4155-b406-e1ff0591348e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.983331 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3278f23c-9157-4155-b406-e1ff0591348e-logs\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.987697 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-scripts\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.988275 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.988702 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.996339 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.996738 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-config-data-custom\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:45 crc kubenswrapper[5000]: I0105 21:51:45.998560 5000 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3278f23c-9157-4155-b406-e1ff0591348e-config-data\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:46 crc kubenswrapper[5000]: I0105 21:51:46.001680 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49stw\" (UniqueName: \"kubernetes.io/projected/3278f23c-9157-4155-b406-e1ff0591348e-kube-api-access-49stw\") pod \"cinder-api-0\" (UID: \"3278f23c-9157-4155-b406-e1ff0591348e\") " pod="openstack/cinder-api-0" Jan 05 21:51:46 crc kubenswrapper[5000]: I0105 21:51:46.076177 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 05 21:51:46 crc kubenswrapper[5000]: I0105 21:51:46.078613 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 05 21:51:46 crc kubenswrapper[5000]: I0105 21:51:46.364572 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-86bdcd58d9-pztv2" Jan 05 21:51:46 crc kubenswrapper[5000]: I0105 21:51:46.459381 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6fbbd8fdfb-jb8jh"] Jan 05 21:51:46 crc kubenswrapper[5000]: I0105 21:51:46.459596 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6fbbd8fdfb-jb8jh" podUID="4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" containerName="neutron-api" containerID="cri-o://e0f390a4c170a3b48071ee13d90429ff93ff0c8145351db2e1333d0469d6d528" gracePeriod=30 Jan 05 21:51:46 crc kubenswrapper[5000]: I0105 21:51:46.460040 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6fbbd8fdfb-jb8jh" podUID="4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" containerName="neutron-httpd" 
containerID="cri-o://fdd4857884fb4751b376b5e0d8d89a5dfe13335920aefaad48554cfd07b5bfe4" gracePeriod=30 Jan 05 21:51:46 crc kubenswrapper[5000]: I0105 21:51:46.678710 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 05 21:51:47 crc kubenswrapper[5000]: I0105 21:51:47.335847 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1115d57d-cd28-4eea-b141-dcc46363b41f" path="/var/lib/kubelet/pods/1115d57d-cd28-4eea-b141-dcc46363b41f/volumes" Jan 05 21:51:47 crc kubenswrapper[5000]: I0105 21:51:47.784963 5000 generic.go:334] "Generic (PLEG): container finished" podID="36acfd32-be57-4078-a5a6-b31cf5608620" containerID="c5cdd304dab123e293afdbd9cf3acb578b37e7f021660d265c546d998cc5db0a" exitCode=137 Jan 05 21:51:47 crc kubenswrapper[5000]: I0105 21:51:47.785161 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9b6995df-77gt4" event={"ID":"36acfd32-be57-4078-a5a6-b31cf5608620","Type":"ContainerDied","Data":"c5cdd304dab123e293afdbd9cf3acb578b37e7f021660d265c546d998cc5db0a"} Jan 05 21:51:47 crc kubenswrapper[5000]: I0105 21:51:47.788715 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3278f23c-9157-4155-b406-e1ff0591348e","Type":"ContainerStarted","Data":"5772d1c7f48de8f5175f5c2ef7926cbebb2d32b568c02607600bff5e15dac154"} Jan 05 21:51:47 crc kubenswrapper[5000]: I0105 21:51:47.788761 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3278f23c-9157-4155-b406-e1ff0591348e","Type":"ContainerStarted","Data":"61a5e95c47cf1ed169e4596380aa5cb78c9433f32912b0ae81359528cb58dd3c"} Jan 05 21:51:47 crc kubenswrapper[5000]: I0105 21:51:47.809510 5000 generic.go:334] "Generic (PLEG): container finished" podID="4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" containerID="fdd4857884fb4751b376b5e0d8d89a5dfe13335920aefaad48554cfd07b5bfe4" exitCode=0 Jan 05 21:51:47 crc kubenswrapper[5000]: I0105 21:51:47.809566 5000 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fbbd8fdfb-jb8jh" event={"ID":"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b","Type":"ContainerDied","Data":"fdd4857884fb4751b376b5e0d8d89a5dfe13335920aefaad48554cfd07b5bfe4"} Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.299767 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.386342 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59df95cbb-xkgb8" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.391159 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-65d5455f76-k75ww" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.450517 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6f48b4784d-5jgvr" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.466516 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36acfd32-be57-4078-a5a6-b31cf5608620-scripts\") pod \"36acfd32-be57-4078-a5a6-b31cf5608620\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.466588 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36acfd32-be57-4078-a5a6-b31cf5608620-logs\") pod \"36acfd32-be57-4078-a5a6-b31cf5608620\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.466750 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36acfd32-be57-4078-a5a6-b31cf5608620-config-data\") pod \"36acfd32-be57-4078-a5a6-b31cf5608620\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " Jan 05 
21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.466788 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbdb6\" (UniqueName: \"kubernetes.io/projected/36acfd32-be57-4078-a5a6-b31cf5608620-kube-api-access-gbdb6\") pod \"36acfd32-be57-4078-a5a6-b31cf5608620\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.466835 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36acfd32-be57-4078-a5a6-b31cf5608620-horizon-secret-key\") pod \"36acfd32-be57-4078-a5a6-b31cf5608620\" (UID: \"36acfd32-be57-4078-a5a6-b31cf5608620\") " Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.467153 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36acfd32-be57-4078-a5a6-b31cf5608620-logs" (OuterVolumeSpecName: "logs") pod "36acfd32-be57-4078-a5a6-b31cf5608620" (UID: "36acfd32-be57-4078-a5a6-b31cf5608620"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.467459 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36acfd32-be57-4078-a5a6-b31cf5608620-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.477009 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36acfd32-be57-4078-a5a6-b31cf5608620-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "36acfd32-be57-4078-a5a6-b31cf5608620" (UID: "36acfd32-be57-4078-a5a6-b31cf5608620"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.491062 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36acfd32-be57-4078-a5a6-b31cf5608620-kube-api-access-gbdb6" (OuterVolumeSpecName: "kube-api-access-gbdb6") pod "36acfd32-be57-4078-a5a6-b31cf5608620" (UID: "36acfd32-be57-4078-a5a6-b31cf5608620"). InnerVolumeSpecName "kube-api-access-gbdb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.507604 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36acfd32-be57-4078-a5a6-b31cf5608620-config-data" (OuterVolumeSpecName: "config-data") pod "36acfd32-be57-4078-a5a6-b31cf5608620" (UID: "36acfd32-be57-4078-a5a6-b31cf5608620"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.522458 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36acfd32-be57-4078-a5a6-b31cf5608620-scripts" (OuterVolumeSpecName: "scripts") pod "36acfd32-be57-4078-a5a6-b31cf5608620" (UID: "36acfd32-be57-4078-a5a6-b31cf5608620"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.569787 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36acfd32-be57-4078-a5a6-b31cf5608620-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.569867 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbdb6\" (UniqueName: \"kubernetes.io/projected/36acfd32-be57-4078-a5a6-b31cf5608620-kube-api-access-gbdb6\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.569881 5000 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36acfd32-be57-4078-a5a6-b31cf5608620-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.569917 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36acfd32-be57-4078-a5a6-b31cf5608620-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.747955 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59df95cbb-xkgb8" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.825165 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58dd4b4f4d-j4qq5"] Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.825397 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api-log" containerID="cri-o://01b24c8f2fc9f0b73154932c01d3b2db6a10d1512a47f52947cf3e687af42a2c" gracePeriod=30 Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.825696 5000 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-58dd4b4f4d-j4qq5" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api" containerID="cri-o://9e30eaff39806a98da7f5f3fdfea13218e33e230058250dafcf16b23032a5d2e" gracePeriod=30 Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.836174 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": EOF" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.836593 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": EOF" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.836994 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": EOF" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.840296 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": EOF" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.840414 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": EOF" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.880215 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"3278f23c-9157-4155-b406-e1ff0591348e","Type":"ContainerStarted","Data":"91cddf7896a7c4e61191e05a6c932247309d9b483828e6196170464eb4c4d7b3"} Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.880598 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.911231 5000 generic.go:334] "Generic (PLEG): container finished" podID="36acfd32-be57-4078-a5a6-b31cf5608620" containerID="a989d89bbd9bc4cabf2763799aeb94a684136555a2bc1f090e37cbe69b1c7c4c" exitCode=137 Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.911315 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f9b6995df-77gt4" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.911374 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9b6995df-77gt4" event={"ID":"36acfd32-be57-4078-a5a6-b31cf5608620","Type":"ContainerDied","Data":"a989d89bbd9bc4cabf2763799aeb94a684136555a2bc1f090e37cbe69b1c7c4c"} Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.911431 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f9b6995df-77gt4" event={"ID":"36acfd32-be57-4078-a5a6-b31cf5608620","Type":"ContainerDied","Data":"01a54c25a964ed21dc9da2fd310620f4c15ff3834a121e638f5169f83f58e403"} Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.911454 5000 scope.go:117] "RemoveContainer" containerID="a989d89bbd9bc4cabf2763799aeb94a684136555a2bc1f090e37cbe69b1c7c4c" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.916595 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.9165781859999997 podStartE2EDuration="3.916578186s" podCreationTimestamp="2026-01-05 21:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:48.915383462 
+0000 UTC m=+1063.871585921" watchObservedRunningTime="2026-01-05 21:51:48.916578186 +0000 UTC m=+1063.872780655" Jan 05 21:51:48 crc kubenswrapper[5000]: I0105 21:51:48.982237 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f9b6995df-77gt4"] Jan 05 21:51:49 crc kubenswrapper[5000]: I0105 21:51:49.023520 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f9b6995df-77gt4"] Jan 05 21:51:49 crc kubenswrapper[5000]: I0105 21:51:49.174072 5000 scope.go:117] "RemoveContainer" containerID="c5cdd304dab123e293afdbd9cf3acb578b37e7f021660d265c546d998cc5db0a" Jan 05 21:51:49 crc kubenswrapper[5000]: I0105 21:51:49.224040 5000 scope.go:117] "RemoveContainer" containerID="a989d89bbd9bc4cabf2763799aeb94a684136555a2bc1f090e37cbe69b1c7c4c" Jan 05 21:51:49 crc kubenswrapper[5000]: E0105 21:51:49.226466 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a989d89bbd9bc4cabf2763799aeb94a684136555a2bc1f090e37cbe69b1c7c4c\": container with ID starting with a989d89bbd9bc4cabf2763799aeb94a684136555a2bc1f090e37cbe69b1c7c4c not found: ID does not exist" containerID="a989d89bbd9bc4cabf2763799aeb94a684136555a2bc1f090e37cbe69b1c7c4c" Jan 05 21:51:49 crc kubenswrapper[5000]: I0105 21:51:49.226514 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a989d89bbd9bc4cabf2763799aeb94a684136555a2bc1f090e37cbe69b1c7c4c"} err="failed to get container status \"a989d89bbd9bc4cabf2763799aeb94a684136555a2bc1f090e37cbe69b1c7c4c\": rpc error: code = NotFound desc = could not find container \"a989d89bbd9bc4cabf2763799aeb94a684136555a2bc1f090e37cbe69b1c7c4c\": container with ID starting with a989d89bbd9bc4cabf2763799aeb94a684136555a2bc1f090e37cbe69b1c7c4c not found: ID does not exist" Jan 05 21:51:49 crc kubenswrapper[5000]: I0105 21:51:49.226539 5000 scope.go:117] "RemoveContainer" 
containerID="c5cdd304dab123e293afdbd9cf3acb578b37e7f021660d265c546d998cc5db0a" Jan 05 21:51:49 crc kubenswrapper[5000]: E0105 21:51:49.230010 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5cdd304dab123e293afdbd9cf3acb578b37e7f021660d265c546d998cc5db0a\": container with ID starting with c5cdd304dab123e293afdbd9cf3acb578b37e7f021660d265c546d998cc5db0a not found: ID does not exist" containerID="c5cdd304dab123e293afdbd9cf3acb578b37e7f021660d265c546d998cc5db0a" Jan 05 21:51:49 crc kubenswrapper[5000]: I0105 21:51:49.230049 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5cdd304dab123e293afdbd9cf3acb578b37e7f021660d265c546d998cc5db0a"} err="failed to get container status \"c5cdd304dab123e293afdbd9cf3acb578b37e7f021660d265c546d998cc5db0a\": rpc error: code = NotFound desc = could not find container \"c5cdd304dab123e293afdbd9cf3acb578b37e7f021660d265c546d998cc5db0a\": container with ID starting with c5cdd304dab123e293afdbd9cf3acb578b37e7f021660d265c546d998cc5db0a not found: ID does not exist" Jan 05 21:51:49 crc kubenswrapper[5000]: I0105 21:51:49.335680 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36acfd32-be57-4078-a5a6-b31cf5608620" path="/var/lib/kubelet/pods/36acfd32-be57-4078-a5a6-b31cf5608620/volumes" Jan 05 21:51:49 crc kubenswrapper[5000]: I0105 21:51:49.954101 5000 generic.go:334] "Generic (PLEG): container finished" podID="0599e537-6c53-4038-893f-4fb7f421c021" containerID="01b24c8f2fc9f0b73154932c01d3b2db6a10d1512a47f52947cf3e687af42a2c" exitCode=143 Jan 05 21:51:49 crc kubenswrapper[5000]: I0105 21:51:49.955438 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" event={"ID":"0599e537-6c53-4038-893f-4fb7f421c021","Type":"ContainerDied","Data":"01b24c8f2fc9f0b73154932c01d3b2db6a10d1512a47f52947cf3e687af42a2c"} Jan 05 21:51:50 crc kubenswrapper[5000]: 
I0105 21:51:50.484378 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6f48b4784d-5jgvr" Jan 05 21:51:50 crc kubenswrapper[5000]: I0105 21:51:50.550531 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65d5455f76-k75ww"] Jan 05 21:51:50 crc kubenswrapper[5000]: I0105 21:51:50.550745 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-65d5455f76-k75ww" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon-log" containerID="cri-o://2bc68cc6f289e4695987859a861fc71e979fb05f30cd34067e711b63a3b9ff85" gracePeriod=30 Jan 05 21:51:50 crc kubenswrapper[5000]: I0105 21:51:50.550999 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-65d5455f76-k75ww" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon" containerID="cri-o://d96fceace8ba67a8696e1baf1bcacdfd1837094a7e25764234e2ee39c7437769" gracePeriod=30 Jan 05 21:51:50 crc kubenswrapper[5000]: I0105 21:51:50.557169 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-65d5455f76-k75ww" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 05 21:51:50 crc kubenswrapper[5000]: I0105 21:51:50.564181 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-65d5455f76-k75ww" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:48140->10.217.0.152:8443: read: connection reset by peer" Jan 05 21:51:50 crc kubenswrapper[5000]: I0105 21:51:50.946368 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:50 crc kubenswrapper[5000]: I0105 21:51:50.964954 5000 generic.go:334] "Generic (PLEG): container finished" podID="4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" containerID="e0f390a4c170a3b48071ee13d90429ff93ff0c8145351db2e1333d0469d6d528" exitCode=0 Jan 05 21:51:50 crc kubenswrapper[5000]: I0105 21:51:50.965000 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fbbd8fdfb-jb8jh" event={"ID":"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b","Type":"ContainerDied","Data":"e0f390a4c170a3b48071ee13d90429ff93ff0c8145351db2e1333d0469d6d528"} Jan 05 21:51:50 crc kubenswrapper[5000]: I0105 21:51:50.965022 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fbbd8fdfb-jb8jh" Jan 05 21:51:50 crc kubenswrapper[5000]: I0105 21:51:50.965042 5000 scope.go:117] "RemoveContainer" containerID="fdd4857884fb4751b376b5e0d8d89a5dfe13335920aefaad48554cfd07b5bfe4" Jan 05 21:51:50 crc kubenswrapper[5000]: I0105 21:51:50.965030 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fbbd8fdfb-jb8jh" event={"ID":"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b","Type":"ContainerDied","Data":"8ee75ad7ac9296cde9c2d87201cf67c3c3d2eb0751247a6c058508a7895c5d93"} Jan 05 21:51:50 crc kubenswrapper[5000]: I0105 21:51:50.987406 5000 scope.go:117] "RemoveContainer" containerID="e0f390a4c170a3b48071ee13d90429ff93ff0c8145351db2e1333d0469d6d528" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.010164 5000 scope.go:117] "RemoveContainer" containerID="fdd4857884fb4751b376b5e0d8d89a5dfe13335920aefaad48554cfd07b5bfe4" Jan 05 21:51:51 crc kubenswrapper[5000]: E0105 21:51:51.010626 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdd4857884fb4751b376b5e0d8d89a5dfe13335920aefaad48554cfd07b5bfe4\": container with ID starting with 
fdd4857884fb4751b376b5e0d8d89a5dfe13335920aefaad48554cfd07b5bfe4 not found: ID does not exist" containerID="fdd4857884fb4751b376b5e0d8d89a5dfe13335920aefaad48554cfd07b5bfe4" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.010669 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdd4857884fb4751b376b5e0d8d89a5dfe13335920aefaad48554cfd07b5bfe4"} err="failed to get container status \"fdd4857884fb4751b376b5e0d8d89a5dfe13335920aefaad48554cfd07b5bfe4\": rpc error: code = NotFound desc = could not find container \"fdd4857884fb4751b376b5e0d8d89a5dfe13335920aefaad48554cfd07b5bfe4\": container with ID starting with fdd4857884fb4751b376b5e0d8d89a5dfe13335920aefaad48554cfd07b5bfe4 not found: ID does not exist" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.010690 5000 scope.go:117] "RemoveContainer" containerID="e0f390a4c170a3b48071ee13d90429ff93ff0c8145351db2e1333d0469d6d528" Jan 05 21:51:51 crc kubenswrapper[5000]: E0105 21:51:51.011083 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0f390a4c170a3b48071ee13d90429ff93ff0c8145351db2e1333d0469d6d528\": container with ID starting with e0f390a4c170a3b48071ee13d90429ff93ff0c8145351db2e1333d0469d6d528 not found: ID does not exist" containerID="e0f390a4c170a3b48071ee13d90429ff93ff0c8145351db2e1333d0469d6d528" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.011109 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0f390a4c170a3b48071ee13d90429ff93ff0c8145351db2e1333d0469d6d528"} err="failed to get container status \"e0f390a4c170a3b48071ee13d90429ff93ff0c8145351db2e1333d0469d6d528\": rpc error: code = NotFound desc = could not find container \"e0f390a4c170a3b48071ee13d90429ff93ff0c8145351db2e1333d0469d6d528\": container with ID starting with e0f390a4c170a3b48071ee13d90429ff93ff0c8145351db2e1333d0469d6d528 not found: ID does not 
exist" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.126386 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd6f8\" (UniqueName: \"kubernetes.io/projected/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-kube-api-access-zd6f8\") pod \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.126722 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-combined-ca-bundle\") pod \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.126765 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-config\") pod \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.126844 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-ovndb-tls-certs\") pod \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.126882 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-httpd-config\") pod \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\" (UID: \"4c0a99dd-168d-4462-9aaf-aef2e16c9a0b\") " Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.134443 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-kube-api-access-zd6f8" 
(OuterVolumeSpecName: "kube-api-access-zd6f8") pod "4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" (UID: "4c0a99dd-168d-4462-9aaf-aef2e16c9a0b"). InnerVolumeSpecName "kube-api-access-zd6f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.149254 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" (UID: "4c0a99dd-168d-4462-9aaf-aef2e16c9a0b"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.186145 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-config" (OuterVolumeSpecName: "config") pod "4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" (UID: "4c0a99dd-168d-4462-9aaf-aef2e16c9a0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.193735 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" (UID: "4c0a99dd-168d-4462-9aaf-aef2e16c9a0b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.232090 5000 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.232125 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd6f8\" (UniqueName: \"kubernetes.io/projected/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-kube-api-access-zd6f8\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.232139 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.232150 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.247210 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" (UID: "4c0a99dd-168d-4462-9aaf-aef2e16c9a0b"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.265072 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.333850 5000 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.334800 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6fbbd8fdfb-jb8jh"] Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.343825 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6fbbd8fdfb-jb8jh"] Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.401408 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dz9jp"] Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.401922 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" podUID="035df708-e6ab-4ed5-9dc8-53f8e1da793b" containerName="dnsmasq-dns" containerID="cri-o://4cc2c9cc1b0017b1326a610aa2adb30a6ff5f9969a9cbcc7ce849c7d9ce4537c" gracePeriod=10 Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.420597 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.516227 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.942139 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.975519 5000 generic.go:334] "Generic (PLEG): container finished" podID="035df708-e6ab-4ed5-9dc8-53f8e1da793b" containerID="4cc2c9cc1b0017b1326a610aa2adb30a6ff5f9969a9cbcc7ce849c7d9ce4537c" exitCode=0 Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.975707 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" containerName="cinder-scheduler" containerID="cri-o://db7afb4a88b62dbd1b089ffc5e649c1b010b56b891c3ed7878c49b2d0f9fe60a" gracePeriod=30 Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.975959 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.976334 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" event={"ID":"035df708-e6ab-4ed5-9dc8-53f8e1da793b","Type":"ContainerDied","Data":"4cc2c9cc1b0017b1326a610aa2adb30a6ff5f9969a9cbcc7ce849c7d9ce4537c"} Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.976360 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-dz9jp" event={"ID":"035df708-e6ab-4ed5-9dc8-53f8e1da793b","Type":"ContainerDied","Data":"02f179587957706ea974b6009955d1651df49906b3959e014438e7af876ea8e7"} Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.976375 5000 scope.go:117] "RemoveContainer" containerID="4cc2c9cc1b0017b1326a610aa2adb30a6ff5f9969a9cbcc7ce849c7d9ce4537c" Jan 05 21:51:51 crc kubenswrapper[5000]: I0105 21:51:51.976438 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" containerName="probe" 
containerID="cri-o://9141b2a78572db48a059dab786e46d330778ddb5fcddf29a8e24fc46d72174a0" gracePeriod=30 Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.004362 5000 scope.go:117] "RemoveContainer" containerID="348efc7f982b20e099f7ddad5d31dd8f2038a0f766572d30749faea05a5aabf6" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.032491 5000 scope.go:117] "RemoveContainer" containerID="4cc2c9cc1b0017b1326a610aa2adb30a6ff5f9969a9cbcc7ce849c7d9ce4537c" Jan 05 21:51:52 crc kubenswrapper[5000]: E0105 21:51:52.032867 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc2c9cc1b0017b1326a610aa2adb30a6ff5f9969a9cbcc7ce849c7d9ce4537c\": container with ID starting with 4cc2c9cc1b0017b1326a610aa2adb30a6ff5f9969a9cbcc7ce849c7d9ce4537c not found: ID does not exist" containerID="4cc2c9cc1b0017b1326a610aa2adb30a6ff5f9969a9cbcc7ce849c7d9ce4537c" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.032925 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cc2c9cc1b0017b1326a610aa2adb30a6ff5f9969a9cbcc7ce849c7d9ce4537c"} err="failed to get container status \"4cc2c9cc1b0017b1326a610aa2adb30a6ff5f9969a9cbcc7ce849c7d9ce4537c\": rpc error: code = NotFound desc = could not find container \"4cc2c9cc1b0017b1326a610aa2adb30a6ff5f9969a9cbcc7ce849c7d9ce4537c\": container with ID starting with 4cc2c9cc1b0017b1326a610aa2adb30a6ff5f9969a9cbcc7ce849c7d9ce4537c not found: ID does not exist" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.032956 5000 scope.go:117] "RemoveContainer" containerID="348efc7f982b20e099f7ddad5d31dd8f2038a0f766572d30749faea05a5aabf6" Jan 05 21:51:52 crc kubenswrapper[5000]: E0105 21:51:52.033244 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"348efc7f982b20e099f7ddad5d31dd8f2038a0f766572d30749faea05a5aabf6\": container with ID starting with 
348efc7f982b20e099f7ddad5d31dd8f2038a0f766572d30749faea05a5aabf6 not found: ID does not exist" containerID="348efc7f982b20e099f7ddad5d31dd8f2038a0f766572d30749faea05a5aabf6" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.033267 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"348efc7f982b20e099f7ddad5d31dd8f2038a0f766572d30749faea05a5aabf6"} err="failed to get container status \"348efc7f982b20e099f7ddad5d31dd8f2038a0f766572d30749faea05a5aabf6\": rpc error: code = NotFound desc = could not find container \"348efc7f982b20e099f7ddad5d31dd8f2038a0f766572d30749faea05a5aabf6\": container with ID starting with 348efc7f982b20e099f7ddad5d31dd8f2038a0f766572d30749faea05a5aabf6 not found: ID does not exist" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.045669 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-config\") pod \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.045759 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-ovsdbserver-sb\") pod \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.045800 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-dns-swift-storage-0\") pod \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.045862 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-ovsdbserver-nb\") pod \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.045907 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-dns-svc\") pod \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.046002 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ffm7\" (UniqueName: \"kubernetes.io/projected/035df708-e6ab-4ed5-9dc8-53f8e1da793b-kube-api-access-6ffm7\") pod \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\" (UID: \"035df708-e6ab-4ed5-9dc8-53f8e1da793b\") " Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.057046 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035df708-e6ab-4ed5-9dc8-53f8e1da793b-kube-api-access-6ffm7" (OuterVolumeSpecName: "kube-api-access-6ffm7") pod "035df708-e6ab-4ed5-9dc8-53f8e1da793b" (UID: "035df708-e6ab-4ed5-9dc8-53f8e1da793b"). InnerVolumeSpecName "kube-api-access-6ffm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.093760 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "035df708-e6ab-4ed5-9dc8-53f8e1da793b" (UID: "035df708-e6ab-4ed5-9dc8-53f8e1da793b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.096264 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-config" (OuterVolumeSpecName: "config") pod "035df708-e6ab-4ed5-9dc8-53f8e1da793b" (UID: "035df708-e6ab-4ed5-9dc8-53f8e1da793b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.098419 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "035df708-e6ab-4ed5-9dc8-53f8e1da793b" (UID: "035df708-e6ab-4ed5-9dc8-53f8e1da793b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.098795 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "035df708-e6ab-4ed5-9dc8-53f8e1da793b" (UID: "035df708-e6ab-4ed5-9dc8-53f8e1da793b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.111824 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "035df708-e6ab-4ed5-9dc8-53f8e1da793b" (UID: "035df708-e6ab-4ed5-9dc8-53f8e1da793b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.147678 5000 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.148032 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.148051 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.148062 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ffm7\" (UniqueName: \"kubernetes.io/projected/035df708-e6ab-4ed5-9dc8-53f8e1da793b-kube-api-access-6ffm7\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.148076 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.148087 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/035df708-e6ab-4ed5-9dc8-53f8e1da793b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.305598 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dz9jp"] Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.337495 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-dz9jp"] Jan 05 
21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.988202 5000 generic.go:334] "Generic (PLEG): container finished" podID="bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" containerID="9141b2a78572db48a059dab786e46d330778ddb5fcddf29a8e24fc46d72174a0" exitCode=0 Jan 05 21:51:52 crc kubenswrapper[5000]: I0105 21:51:52.988281 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f","Type":"ContainerDied","Data":"9141b2a78572db48a059dab786e46d330778ddb5fcddf29a8e24fc46d72174a0"} Jan 05 21:51:53 crc kubenswrapper[5000]: I0105 21:51:53.335791 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="035df708-e6ab-4ed5-9dc8-53f8e1da793b" path="/var/lib/kubelet/pods/035df708-e6ab-4ed5-9dc8-53f8e1da793b/volumes" Jan 05 21:51:53 crc kubenswrapper[5000]: I0105 21:51:53.336787 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" path="/var/lib/kubelet/pods/4c0a99dd-168d-4462-9aaf-aef2e16c9a0b/volumes" Jan 05 21:51:53 crc kubenswrapper[5000]: I0105 21:51:53.882220 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.243656 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:37306->10.217.0.165:9311: read: connection reset by peer" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.244514 5000 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-58dd4b4f4d-j4qq5" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:37322->10.217.0.165:9311: read: connection reset by peer" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.725188 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-65d5455f76-k75ww" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.760619 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.767392 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.782068 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-combined-ca-bundle\") pod \"0599e537-6c53-4038-893f-4fb7f421c021\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.782197 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-scripts\") pod \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.782235 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0599e537-6c53-4038-893f-4fb7f421c021-logs\") pod 
\"0599e537-6c53-4038-893f-4fb7f421c021\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.782299 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w7vh\" (UniqueName: \"kubernetes.io/projected/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-kube-api-access-6w7vh\") pod \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.782333 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-config-data-custom\") pod \"0599e537-6c53-4038-893f-4fb7f421c021\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.782354 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-combined-ca-bundle\") pod \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.782405 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-config-data\") pod \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.782458 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7glp\" (UniqueName: \"kubernetes.io/projected/0599e537-6c53-4038-893f-4fb7f421c021-kube-api-access-s7glp\") pod \"0599e537-6c53-4038-893f-4fb7f421c021\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.782490 5000 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-config-data\") pod \"0599e537-6c53-4038-893f-4fb7f421c021\" (UID: \"0599e537-6c53-4038-893f-4fb7f421c021\") " Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.782521 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-etc-machine-id\") pod \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.782560 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-config-data-custom\") pod \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\" (UID: \"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f\") " Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.783531 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" (UID: "bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.783955 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0599e537-6c53-4038-893f-4fb7f421c021-logs" (OuterVolumeSpecName: "logs") pod "0599e537-6c53-4038-893f-4fb7f421c021" (UID: "0599e537-6c53-4038-893f-4fb7f421c021"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.789489 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" (UID: "bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.791542 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-kube-api-access-6w7vh" (OuterVolumeSpecName: "kube-api-access-6w7vh") pod "bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" (UID: "bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f"). InnerVolumeSpecName "kube-api-access-6w7vh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.792764 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-scripts" (OuterVolumeSpecName: "scripts") pod "bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" (UID: "bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.795965 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0599e537-6c53-4038-893f-4fb7f421c021-kube-api-access-s7glp" (OuterVolumeSpecName: "kube-api-access-s7glp") pod "0599e537-6c53-4038-893f-4fb7f421c021" (UID: "0599e537-6c53-4038-893f-4fb7f421c021"). InnerVolumeSpecName "kube-api-access-s7glp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.819457 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0599e537-6c53-4038-893f-4fb7f421c021" (UID: "0599e537-6c53-4038-893f-4fb7f421c021"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.821583 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0599e537-6c53-4038-893f-4fb7f421c021" (UID: "0599e537-6c53-4038-893f-4fb7f421c021"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.855292 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-config-data" (OuterVolumeSpecName: "config-data") pod "0599e537-6c53-4038-893f-4fb7f421c021" (UID: "0599e537-6c53-4038-893f-4fb7f421c021"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.878915 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" (UID: "bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.884025 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.884058 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0599e537-6c53-4038-893f-4fb7f421c021-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.884067 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w7vh\" (UniqueName: \"kubernetes.io/projected/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-kube-api-access-6w7vh\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.884077 5000 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.884086 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.884094 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7glp\" (UniqueName: \"kubernetes.io/projected/0599e537-6c53-4038-893f-4fb7f421c021-kube-api-access-s7glp\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.884103 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.884111 5000 
reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.884122 5000 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.884130 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0599e537-6c53-4038-893f-4fb7f421c021-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.904498 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-config-data" (OuterVolumeSpecName: "config-data") pod "bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" (UID: "bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:51:55 crc kubenswrapper[5000]: I0105 21:51:55.985374 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.014542 5000 generic.go:334] "Generic (PLEG): container finished" podID="bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" containerID="db7afb4a88b62dbd1b089ffc5e649c1b010b56b891c3ed7878c49b2d0f9fe60a" exitCode=0 Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.014592 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f","Type":"ContainerDied","Data":"db7afb4a88b62dbd1b089ffc5e649c1b010b56b891c3ed7878c49b2d0f9fe60a"} Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.014629 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.014651 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f","Type":"ContainerDied","Data":"ced7dc86562a1a137054cdfc2daf639b48718d0589ace9882508e146500e76b8"} Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.014674 5000 scope.go:117] "RemoveContainer" containerID="9141b2a78572db48a059dab786e46d330778ddb5fcddf29a8e24fc46d72174a0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.017183 5000 generic.go:334] "Generic (PLEG): container finished" podID="0599e537-6c53-4038-893f-4fb7f421c021" containerID="9e30eaff39806a98da7f5f3fdfea13218e33e230058250dafcf16b23032a5d2e" exitCode=0 Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.017289 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.017385 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" event={"ID":"0599e537-6c53-4038-893f-4fb7f421c021","Type":"ContainerDied","Data":"9e30eaff39806a98da7f5f3fdfea13218e33e230058250dafcf16b23032a5d2e"} Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.017426 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dd4b4f4d-j4qq5" event={"ID":"0599e537-6c53-4038-893f-4fb7f421c021","Type":"ContainerDied","Data":"0c9a595feea571ed294b034f49133d8c1fab86cfa7f46748f9c337337ddb2c84"} Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.022000 5000 generic.go:334] "Generic (PLEG): container finished" podID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerID="d96fceace8ba67a8696e1baf1bcacdfd1837094a7e25764234e2ee39c7437769" exitCode=0 Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.022038 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65d5455f76-k75ww" event={"ID":"e000bdc7-d544-4dfe-ab2e-6c43a7453748","Type":"ContainerDied","Data":"d96fceace8ba67a8696e1baf1bcacdfd1837094a7e25764234e2ee39c7437769"} Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.041354 5000 scope.go:117] "RemoveContainer" containerID="db7afb4a88b62dbd1b089ffc5e649c1b010b56b891c3ed7878c49b2d0f9fe60a" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.056249 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58dd4b4f4d-j4qq5"] Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.064149 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-58dd4b4f4d-j4qq5"] Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.070141 5000 scope.go:117] "RemoveContainer" containerID="9141b2a78572db48a059dab786e46d330778ddb5fcddf29a8e24fc46d72174a0" Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 
21:51:56.070659 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9141b2a78572db48a059dab786e46d330778ddb5fcddf29a8e24fc46d72174a0\": container with ID starting with 9141b2a78572db48a059dab786e46d330778ddb5fcddf29a8e24fc46d72174a0 not found: ID does not exist" containerID="9141b2a78572db48a059dab786e46d330778ddb5fcddf29a8e24fc46d72174a0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.070721 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9141b2a78572db48a059dab786e46d330778ddb5fcddf29a8e24fc46d72174a0"} err="failed to get container status \"9141b2a78572db48a059dab786e46d330778ddb5fcddf29a8e24fc46d72174a0\": rpc error: code = NotFound desc = could not find container \"9141b2a78572db48a059dab786e46d330778ddb5fcddf29a8e24fc46d72174a0\": container with ID starting with 9141b2a78572db48a059dab786e46d330778ddb5fcddf29a8e24fc46d72174a0 not found: ID does not exist" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.070755 5000 scope.go:117] "RemoveContainer" containerID="db7afb4a88b62dbd1b089ffc5e649c1b010b56b891c3ed7878c49b2d0f9fe60a" Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 21:51:56.072411 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db7afb4a88b62dbd1b089ffc5e649c1b010b56b891c3ed7878c49b2d0f9fe60a\": container with ID starting with db7afb4a88b62dbd1b089ffc5e649c1b010b56b891c3ed7878c49b2d0f9fe60a not found: ID does not exist" containerID="db7afb4a88b62dbd1b089ffc5e649c1b010b56b891c3ed7878c49b2d0f9fe60a" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.072526 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7afb4a88b62dbd1b089ffc5e649c1b010b56b891c3ed7878c49b2d0f9fe60a"} err="failed to get container status \"db7afb4a88b62dbd1b089ffc5e649c1b010b56b891c3ed7878c49b2d0f9fe60a\": rpc 
error: code = NotFound desc = could not find container \"db7afb4a88b62dbd1b089ffc5e649c1b010b56b891c3ed7878c49b2d0f9fe60a\": container with ID starting with db7afb4a88b62dbd1b089ffc5e649c1b010b56b891c3ed7878c49b2d0f9fe60a not found: ID does not exist" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.072606 5000 scope.go:117] "RemoveContainer" containerID="9e30eaff39806a98da7f5f3fdfea13218e33e230058250dafcf16b23032a5d2e" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.073231 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.085522 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.146332 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 21:51:56.146871 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035df708-e6ab-4ed5-9dc8-53f8e1da793b" containerName="init" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.146897 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="035df708-e6ab-4ed5-9dc8-53f8e1da793b" containerName="init" Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 21:51:56.146917 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" containerName="neutron-httpd" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.146923 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" containerName="neutron-httpd" Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 21:51:56.146936 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035df708-e6ab-4ed5-9dc8-53f8e1da793b" containerName="dnsmasq-dns" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.146945 5000 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="035df708-e6ab-4ed5-9dc8-53f8e1da793b" containerName="dnsmasq-dns" Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 21:51:56.146967 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api-log" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.146973 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api-log" Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 21:51:56.146986 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" containerName="neutron-api" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.146991 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" containerName="neutron-api" Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 21:51:56.147001 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147007 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api" Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 21:51:56.147024 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" containerName="cinder-scheduler" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147030 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" containerName="cinder-scheduler" Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 21:51:56.147056 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36acfd32-be57-4078-a5a6-b31cf5608620" containerName="horizon" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147063 5000 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="36acfd32-be57-4078-a5a6-b31cf5608620" containerName="horizon" Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 21:51:56.147097 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" containerName="probe" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147103 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" containerName="probe" Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 21:51:56.147115 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36acfd32-be57-4078-a5a6-b31cf5608620" containerName="horizon-log" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147121 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="36acfd32-be57-4078-a5a6-b31cf5608620" containerName="horizon-log" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147390 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" containerName="neutron-api" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147406 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c0a99dd-168d-4462-9aaf-aef2e16c9a0b" containerName="neutron-httpd" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147425 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api-log" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147436 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" containerName="cinder-scheduler" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147462 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="36acfd32-be57-4078-a5a6-b31cf5608620" containerName="horizon" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147481 5000 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" containerName="probe" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147493 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="36acfd32-be57-4078-a5a6-b31cf5608620" containerName="horizon-log" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147511 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="035df708-e6ab-4ed5-9dc8-53f8e1da793b" containerName="dnsmasq-dns" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.147527 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="0599e537-6c53-4038-893f-4fb7f421c021" containerName="barbican-api" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.149647 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.153664 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.163135 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.183069 5000 scope.go:117] "RemoveContainer" containerID="01b24c8f2fc9f0b73154932c01d3b2db6a10d1512a47f52947cf3e687af42a2c" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.188175 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed63e4c-9365-423b-8eaf-a959b812ed86-scripts\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.188266 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed63e4c-9365-423b-8eaf-a959b812ed86-combined-ca-bundle\") 
pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.188314 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed63e4c-9365-423b-8eaf-a959b812ed86-config-data\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.188331 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzvtl\" (UniqueName: \"kubernetes.io/projected/2ed63e4c-9365-423b-8eaf-a959b812ed86-kube-api-access-wzvtl\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.188407 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ed63e4c-9365-423b-8eaf-a959b812ed86-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.188459 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ed63e4c-9365-423b-8eaf-a959b812ed86-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.213330 5000 scope.go:117] "RemoveContainer" containerID="9e30eaff39806a98da7f5f3fdfea13218e33e230058250dafcf16b23032a5d2e" Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 21:51:56.213759 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"9e30eaff39806a98da7f5f3fdfea13218e33e230058250dafcf16b23032a5d2e\": container with ID starting with 9e30eaff39806a98da7f5f3fdfea13218e33e230058250dafcf16b23032a5d2e not found: ID does not exist" containerID="9e30eaff39806a98da7f5f3fdfea13218e33e230058250dafcf16b23032a5d2e" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.213821 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e30eaff39806a98da7f5f3fdfea13218e33e230058250dafcf16b23032a5d2e"} err="failed to get container status \"9e30eaff39806a98da7f5f3fdfea13218e33e230058250dafcf16b23032a5d2e\": rpc error: code = NotFound desc = could not find container \"9e30eaff39806a98da7f5f3fdfea13218e33e230058250dafcf16b23032a5d2e\": container with ID starting with 9e30eaff39806a98da7f5f3fdfea13218e33e230058250dafcf16b23032a5d2e not found: ID does not exist" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.213858 5000 scope.go:117] "RemoveContainer" containerID="01b24c8f2fc9f0b73154932c01d3b2db6a10d1512a47f52947cf3e687af42a2c" Jan 05 21:51:56 crc kubenswrapper[5000]: E0105 21:51:56.214276 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01b24c8f2fc9f0b73154932c01d3b2db6a10d1512a47f52947cf3e687af42a2c\": container with ID starting with 01b24c8f2fc9f0b73154932c01d3b2db6a10d1512a47f52947cf3e687af42a2c not found: ID does not exist" containerID="01b24c8f2fc9f0b73154932c01d3b2db6a10d1512a47f52947cf3e687af42a2c" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.214324 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b24c8f2fc9f0b73154932c01d3b2db6a10d1512a47f52947cf3e687af42a2c"} err="failed to get container status \"01b24c8f2fc9f0b73154932c01d3b2db6a10d1512a47f52947cf3e687af42a2c\": rpc error: code = NotFound desc = could not find container 
\"01b24c8f2fc9f0b73154932c01d3b2db6a10d1512a47f52947cf3e687af42a2c\": container with ID starting with 01b24c8f2fc9f0b73154932c01d3b2db6a10d1512a47f52947cf3e687af42a2c not found: ID does not exist" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.290166 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed63e4c-9365-423b-8eaf-a959b812ed86-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.290265 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed63e4c-9365-423b-8eaf-a959b812ed86-config-data\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.290294 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzvtl\" (UniqueName: \"kubernetes.io/projected/2ed63e4c-9365-423b-8eaf-a959b812ed86-kube-api-access-wzvtl\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.291176 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ed63e4c-9365-423b-8eaf-a959b812ed86-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.291213 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ed63e4c-9365-423b-8eaf-a959b812ed86-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.291282 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ed63e4c-9365-423b-8eaf-a959b812ed86-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.291532 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed63e4c-9365-423b-8eaf-a959b812ed86-scripts\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.295541 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed63e4c-9365-423b-8eaf-a959b812ed86-scripts\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.295610 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ed63e4c-9365-423b-8eaf-a959b812ed86-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.295846 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed63e4c-9365-423b-8eaf-a959b812ed86-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.297526 5000 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed63e4c-9365-423b-8eaf-a959b812ed86-config-data\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.311255 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzvtl\" (UniqueName: \"kubernetes.io/projected/2ed63e4c-9365-423b-8eaf-a959b812ed86-kube-api-access-wzvtl\") pod \"cinder-scheduler-0\" (UID: \"2ed63e4c-9365-423b-8eaf-a959b812ed86\") " pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.477435 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 05 21:51:56 crc kubenswrapper[5000]: I0105 21:51:56.931533 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 05 21:51:57 crc kubenswrapper[5000]: I0105 21:51:57.038905 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2ed63e4c-9365-423b-8eaf-a959b812ed86","Type":"ContainerStarted","Data":"099ee478deb8ed56f92f37f42978bd134ad7f3a0db3b3c2a419ef9a629181936"} Jan 05 21:51:57 crc kubenswrapper[5000]: I0105 21:51:57.342392 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0599e537-6c53-4038-893f-4fb7f421c021" path="/var/lib/kubelet/pods/0599e537-6c53-4038-893f-4fb7f421c021/volumes" Jan 05 21:51:57 crc kubenswrapper[5000]: I0105 21:51:57.343442 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f" path="/var/lib/kubelet/pods/bb5bc15d-ce48-48fe-9c4e-3d18dbeabe9f/volumes" Jan 05 21:51:58 crc kubenswrapper[5000]: I0105 21:51:58.023650 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 05 21:51:58 crc kubenswrapper[5000]: I0105 21:51:58.066492 5000 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2ed63e4c-9365-423b-8eaf-a959b812ed86","Type":"ContainerStarted","Data":"3e45e6284b0b81b56705dbc03b4d39b00090e223445ac6df74c4d0d169520914"} Jan 05 21:51:58 crc kubenswrapper[5000]: I0105 21:51:58.066534 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2ed63e4c-9365-423b-8eaf-a959b812ed86","Type":"ContainerStarted","Data":"8d7a83e20af912e9121340bdae0d86e121542badc9446196ebd6c8560dc450a7"} Jan 05 21:51:58 crc kubenswrapper[5000]: I0105 21:51:58.093352 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.093337067 podStartE2EDuration="2.093337067s" podCreationTimestamp="2026-01-05 21:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:51:58.091809153 +0000 UTC m=+1073.048011632" watchObservedRunningTime="2026-01-05 21:51:58.093337067 +0000 UTC m=+1073.049539536" Jan 05 21:51:59 crc kubenswrapper[5000]: I0105 21:51:59.652534 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:51:59 crc kubenswrapper[5000]: I0105 21:51:59.729004 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-859855f89d-t6p2g" Jan 05 21:52:00 crc kubenswrapper[5000]: I0105 21:52:00.410503 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6c8579bfdd-r7vxj" Jan 05 21:52:01 crc kubenswrapper[5000]: I0105 21:52:01.478469 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.289577 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 
21:52:03.290939 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.294689 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.294691 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.295980 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-q948k" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.302697 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.417332 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz85p\" (UniqueName: \"kubernetes.io/projected/046f24d3-66d8-4a8b-bd20-d1f79426033b-kube-api-access-vz85p\") pod \"openstackclient\" (UID: \"046f24d3-66d8-4a8b-bd20-d1f79426033b\") " pod="openstack/openstackclient" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.417572 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046f24d3-66d8-4a8b-bd20-d1f79426033b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"046f24d3-66d8-4a8b-bd20-d1f79426033b\") " pod="openstack/openstackclient" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.417650 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/046f24d3-66d8-4a8b-bd20-d1f79426033b-openstack-config\") pod \"openstackclient\" (UID: \"046f24d3-66d8-4a8b-bd20-d1f79426033b\") " pod="openstack/openstackclient" Jan 05 21:52:03 crc 
kubenswrapper[5000]: I0105 21:52:03.417724 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/046f24d3-66d8-4a8b-bd20-d1f79426033b-openstack-config-secret\") pod \"openstackclient\" (UID: \"046f24d3-66d8-4a8b-bd20-d1f79426033b\") " pod="openstack/openstackclient" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.519824 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/046f24d3-66d8-4a8b-bd20-d1f79426033b-openstack-config\") pod \"openstackclient\" (UID: \"046f24d3-66d8-4a8b-bd20-d1f79426033b\") " pod="openstack/openstackclient" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.519868 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/046f24d3-66d8-4a8b-bd20-d1f79426033b-openstack-config-secret\") pod \"openstackclient\" (UID: \"046f24d3-66d8-4a8b-bd20-d1f79426033b\") " pod="openstack/openstackclient" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.519977 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz85p\" (UniqueName: \"kubernetes.io/projected/046f24d3-66d8-4a8b-bd20-d1f79426033b-kube-api-access-vz85p\") pod \"openstackclient\" (UID: \"046f24d3-66d8-4a8b-bd20-d1f79426033b\") " pod="openstack/openstackclient" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.520052 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046f24d3-66d8-4a8b-bd20-d1f79426033b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"046f24d3-66d8-4a8b-bd20-d1f79426033b\") " pod="openstack/openstackclient" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.520660 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/046f24d3-66d8-4a8b-bd20-d1f79426033b-openstack-config\") pod \"openstackclient\" (UID: \"046f24d3-66d8-4a8b-bd20-d1f79426033b\") " pod="openstack/openstackclient" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.533364 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/046f24d3-66d8-4a8b-bd20-d1f79426033b-openstack-config-secret\") pod \"openstackclient\" (UID: \"046f24d3-66d8-4a8b-bd20-d1f79426033b\") " pod="openstack/openstackclient" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.534531 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046f24d3-66d8-4a8b-bd20-d1f79426033b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"046f24d3-66d8-4a8b-bd20-d1f79426033b\") " pod="openstack/openstackclient" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.537203 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz85p\" (UniqueName: \"kubernetes.io/projected/046f24d3-66d8-4a8b-bd20-d1f79426033b-kube-api-access-vz85p\") pod \"openstackclient\" (UID: \"046f24d3-66d8-4a8b-bd20-d1f79426033b\") " pod="openstack/openstackclient" Jan 05 21:52:03 crc kubenswrapper[5000]: I0105 21:52:03.609660 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 05 21:52:04 crc kubenswrapper[5000]: I0105 21:52:04.132615 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 05 21:52:05 crc kubenswrapper[5000]: I0105 21:52:05.136050 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"046f24d3-66d8-4a8b-bd20-d1f79426033b","Type":"ContainerStarted","Data":"f92482d5725bfe673d719ced4c2706a5a13f52f7fd51db89008dcf9539cc849c"} Jan 05 21:52:05 crc kubenswrapper[5000]: I0105 21:52:05.725402 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-65d5455f76-k75ww" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.727221 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.872818 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5759bb69bf-chpv9"] Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.878278 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.880626 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.882156 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.882262 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.901177 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5759bb69bf-chpv9"] Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.991395 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3694130-425f-4455-9275-0899d204bc66-public-tls-certs\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.991693 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3694130-425f-4455-9275-0899d204bc66-etc-swift\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.991744 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kkcd\" (UniqueName: \"kubernetes.io/projected/b3694130-425f-4455-9275-0899d204bc66-kube-api-access-9kkcd\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:06 crc 
kubenswrapper[5000]: I0105 21:52:06.991800 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3694130-425f-4455-9275-0899d204bc66-run-httpd\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.991967 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3694130-425f-4455-9275-0899d204bc66-internal-tls-certs\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.992054 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3694130-425f-4455-9275-0899d204bc66-log-httpd\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.992084 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3694130-425f-4455-9275-0899d204bc66-combined-ca-bundle\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:06 crc kubenswrapper[5000]: I0105 21:52:06.992446 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3694130-425f-4455-9275-0899d204bc66-config-data\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " 
pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.094534 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3694130-425f-4455-9275-0899d204bc66-config-data\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.094602 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3694130-425f-4455-9275-0899d204bc66-public-tls-certs\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.094655 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3694130-425f-4455-9275-0899d204bc66-etc-swift\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.094682 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kkcd\" (UniqueName: \"kubernetes.io/projected/b3694130-425f-4455-9275-0899d204bc66-kube-api-access-9kkcd\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.094723 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3694130-425f-4455-9275-0899d204bc66-run-httpd\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc 
kubenswrapper[5000]: I0105 21:52:07.094766 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3694130-425f-4455-9275-0899d204bc66-internal-tls-certs\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.094834 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3694130-425f-4455-9275-0899d204bc66-log-httpd\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.094912 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3694130-425f-4455-9275-0899d204bc66-combined-ca-bundle\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.095985 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3694130-425f-4455-9275-0899d204bc66-log-httpd\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.096184 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3694130-425f-4455-9275-0899d204bc66-run-httpd\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.103608 5000 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3694130-425f-4455-9275-0899d204bc66-config-data\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.105544 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3694130-425f-4455-9275-0899d204bc66-etc-swift\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.106209 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3694130-425f-4455-9275-0899d204bc66-combined-ca-bundle\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.112599 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3694130-425f-4455-9275-0899d204bc66-internal-tls-certs\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.119291 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kkcd\" (UniqueName: \"kubernetes.io/projected/b3694130-425f-4455-9275-0899d204bc66-kube-api-access-9kkcd\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.121481 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b3694130-425f-4455-9275-0899d204bc66-public-tls-certs\") pod \"swift-proxy-5759bb69bf-chpv9\" (UID: \"b3694130-425f-4455-9275-0899d204bc66\") " pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.200291 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:07 crc kubenswrapper[5000]: I0105 21:52:07.786874 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5759bb69bf-chpv9"] Jan 05 21:52:07 crc kubenswrapper[5000]: W0105 21:52:07.796999 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3694130_425f_4455_9275_0899d204bc66.slice/crio-85395adb129380b664aa6ad8f6addebddeeddec5a20a3586c9403e0b05b51ef6 WatchSource:0}: Error finding container 85395adb129380b664aa6ad8f6addebddeeddec5a20a3586c9403e0b05b51ef6: Status 404 returned error can't find the container with id 85395adb129380b664aa6ad8f6addebddeeddec5a20a3586c9403e0b05b51ef6 Jan 05 21:52:08 crc kubenswrapper[5000]: I0105 21:52:08.166017 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5759bb69bf-chpv9" event={"ID":"b3694130-425f-4455-9275-0899d204bc66","Type":"ContainerStarted","Data":"1abdc228f51b288079b808307589b3f5a58f5389ecf4812a199f673049751aa0"} Jan 05 21:52:08 crc kubenswrapper[5000]: I0105 21:52:08.166379 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5759bb69bf-chpv9" event={"ID":"b3694130-425f-4455-9275-0899d204bc66","Type":"ContainerStarted","Data":"85395adb129380b664aa6ad8f6addebddeeddec5a20a3586c9403e0b05b51ef6"} Jan 05 21:52:09 crc kubenswrapper[5000]: I0105 21:52:09.177715 5000 generic.go:334] "Generic (PLEG): container finished" podID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerID="8a08e6c6322151b56e75d3d98f89baddbc9e7db12cadfeee86ff569c7f8a2fd4" exitCode=137 Jan 05 
21:52:09 crc kubenswrapper[5000]: I0105 21:52:09.178007 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77e33e26-6a57-4f48-9d16-3bb5502b1f76","Type":"ContainerDied","Data":"8a08e6c6322151b56e75d3d98f89baddbc9e7db12cadfeee86ff569c7f8a2fd4"} Jan 05 21:52:09 crc kubenswrapper[5000]: I0105 21:52:09.179410 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5759bb69bf-chpv9" event={"ID":"b3694130-425f-4455-9275-0899d204bc66","Type":"ContainerStarted","Data":"fe49f9097b1dbf283b517f54dd15022843ed349eb62896ee4c5e6f9146d51ded"} Jan 05 21:52:09 crc kubenswrapper[5000]: I0105 21:52:09.180487 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:09 crc kubenswrapper[5000]: I0105 21:52:09.180510 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.350929 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5759bb69bf-chpv9" podStartSLOduration=8.350911659 podStartE2EDuration="8.350911659s" podCreationTimestamp="2026-01-05 21:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:52:09.201025733 +0000 UTC m=+1084.157228202" watchObservedRunningTime="2026-01-05 21:52:14.350911659 +0000 UTC m=+1089.307114128" Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.358854 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.359112 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2e11de54-ff33-4464-ab87-a565a688e5b5" containerName="glance-log" 
containerID="cri-o://5b3e19ec85ab5d1bfc58b48b7c2c6760222c79cbb6bec39f4cba459ef7bca5cf" gracePeriod=30 Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.359519 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2e11de54-ff33-4464-ab87-a565a688e5b5" containerName="glance-httpd" containerID="cri-o://76e09c4cbb3e238dcd2e7491cb919eb10fdf4583f80385175b8a5625a9b06317" gracePeriod=30 Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.881642 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.951052 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-config-data\") pod \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.951124 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-combined-ca-bundle\") pod \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.951169 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-sg-core-conf-yaml\") pod \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.951204 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-scripts\") pod \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\" (UID: 
\"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.951253 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77e33e26-6a57-4f48-9d16-3bb5502b1f76-run-httpd\") pod \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.951270 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77e33e26-6a57-4f48-9d16-3bb5502b1f76-log-httpd\") pod \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.951396 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgkgl\" (UniqueName: \"kubernetes.io/projected/77e33e26-6a57-4f48-9d16-3bb5502b1f76-kube-api-access-bgkgl\") pod \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\" (UID: \"77e33e26-6a57-4f48-9d16-3bb5502b1f76\") " Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.952229 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77e33e26-6a57-4f48-9d16-3bb5502b1f76-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "77e33e26-6a57-4f48-9d16-3bb5502b1f76" (UID: "77e33e26-6a57-4f48-9d16-3bb5502b1f76"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.952394 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77e33e26-6a57-4f48-9d16-3bb5502b1f76-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "77e33e26-6a57-4f48-9d16-3bb5502b1f76" (UID: "77e33e26-6a57-4f48-9d16-3bb5502b1f76"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.957442 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77e33e26-6a57-4f48-9d16-3bb5502b1f76-kube-api-access-bgkgl" (OuterVolumeSpecName: "kube-api-access-bgkgl") pod "77e33e26-6a57-4f48-9d16-3bb5502b1f76" (UID: "77e33e26-6a57-4f48-9d16-3bb5502b1f76"). InnerVolumeSpecName "kube-api-access-bgkgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:14 crc kubenswrapper[5000]: I0105 21:52:14.959595 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-scripts" (OuterVolumeSpecName: "scripts") pod "77e33e26-6a57-4f48-9d16-3bb5502b1f76" (UID: "77e33e26-6a57-4f48-9d16-3bb5502b1f76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.004335 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "77e33e26-6a57-4f48-9d16-3bb5502b1f76" (UID: "77e33e26-6a57-4f48-9d16-3bb5502b1f76"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.013487 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77e33e26-6a57-4f48-9d16-3bb5502b1f76" (UID: "77e33e26-6a57-4f48-9d16-3bb5502b1f76"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.044618 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-config-data" (OuterVolumeSpecName: "config-data") pod "77e33e26-6a57-4f48-9d16-3bb5502b1f76" (UID: "77e33e26-6a57-4f48-9d16-3bb5502b1f76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.054335 5000 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77e33e26-6a57-4f48-9d16-3bb5502b1f76-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.059519 5000 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77e33e26-6a57-4f48-9d16-3bb5502b1f76-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.059555 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgkgl\" (UniqueName: \"kubernetes.io/projected/77e33e26-6a57-4f48-9d16-3bb5502b1f76-kube-api-access-bgkgl\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.059574 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.059586 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.059599 5000 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.059611 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77e33e26-6a57-4f48-9d16-3bb5502b1f76-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.236221 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77e33e26-6a57-4f48-9d16-3bb5502b1f76","Type":"ContainerDied","Data":"4f3d2fcc8ab7accc13757566dcb6d31489397af2c8c6bde0a58d47e4724e911c"} Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.236277 5000 scope.go:117] "RemoveContainer" containerID="8a08e6c6322151b56e75d3d98f89baddbc9e7db12cadfeee86ff569c7f8a2fd4" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.236300 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.238736 5000 generic.go:334] "Generic (PLEG): container finished" podID="2e11de54-ff33-4464-ab87-a565a688e5b5" containerID="5b3e19ec85ab5d1bfc58b48b7c2c6760222c79cbb6bec39f4cba459ef7bca5cf" exitCode=143 Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.238787 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e11de54-ff33-4464-ab87-a565a688e5b5","Type":"ContainerDied","Data":"5b3e19ec85ab5d1bfc58b48b7c2c6760222c79cbb6bec39f4cba459ef7bca5cf"} Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.241485 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"046f24d3-66d8-4a8b-bd20-d1f79426033b","Type":"ContainerStarted","Data":"b998863ca89581c79d9603523705a22b63d2a7dc0dc09eb4d86545177255a8c2"} Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.260551 5000 scope.go:117] "RemoveContainer" 
containerID="28226b1d0d41ad62538f1f8c07ede3fe51fe633fad414b174aa289b8ed538265" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.268870 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.714249973 podStartE2EDuration="12.268846287s" podCreationTimestamp="2026-01-05 21:52:03 +0000 UTC" firstStartedPulling="2026-01-05 21:52:04.139881975 +0000 UTC m=+1079.096084444" lastFinishedPulling="2026-01-05 21:52:14.694478289 +0000 UTC m=+1089.650680758" observedRunningTime="2026-01-05 21:52:15.26123053 +0000 UTC m=+1090.217432999" watchObservedRunningTime="2026-01-05 21:52:15.268846287 +0000 UTC m=+1090.225048756" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.284206 5000 scope.go:117] "RemoveContainer" containerID="a8c8b079ed669aad435f15885d78ccf11ab351d19b2b62085a1b352870fb5d13" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.320322 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.348454 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.354932 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:15 crc kubenswrapper[5000]: E0105 21:52:15.355316 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerName="ceilometer-notification-agent" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.355328 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerName="ceilometer-notification-agent" Jan 05 21:52:15 crc kubenswrapper[5000]: E0105 21:52:15.355350 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerName="sg-core" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.355357 
5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerName="sg-core" Jan 05 21:52:15 crc kubenswrapper[5000]: E0105 21:52:15.355385 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerName="proxy-httpd" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.355393 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerName="proxy-httpd" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.355543 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerName="sg-core" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.355559 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerName="proxy-httpd" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.355567 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" containerName="ceilometer-notification-agent" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.358646 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.361326 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.361522 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.382634 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.408785 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-p89b7"] Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.409993 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p89b7" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.419113 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p89b7"] Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.467262 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53cad663-dcd9-47f8-ace9-a6376185c4e2-operator-scripts\") pod \"nova-api-db-create-p89b7\" (UID: \"53cad663-dcd9-47f8-ace9-a6376185c4e2\") " pod="openstack/nova-api-db-create-p89b7" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.467347 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpr4t\" (UniqueName: \"kubernetes.io/projected/53cad663-dcd9-47f8-ace9-a6376185c4e2-kube-api-access-qpr4t\") pod \"nova-api-db-create-p89b7\" (UID: \"53cad663-dcd9-47f8-ace9-a6376185c4e2\") " pod="openstack/nova-api-db-create-p89b7" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.467386 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-scripts\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.467453 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-run-httpd\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.467474 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.467573 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-config-data\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.467642 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-log-httpd\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.467663 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.467680 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46258\" (UniqueName: \"kubernetes.io/projected/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-kube-api-access-46258\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.484416 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jbj8l"] Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.485528 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jbj8l" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.492461 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4112-account-create-update-lq2nf"] Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.495204 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4112-account-create-update-lq2nf" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.497857 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.501874 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jbj8l"] Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.509323 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4112-account-create-update-lq2nf"] Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.571832 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpr4t\" (UniqueName: \"kubernetes.io/projected/53cad663-dcd9-47f8-ace9-a6376185c4e2-kube-api-access-qpr4t\") pod \"nova-api-db-create-p89b7\" (UID: \"53cad663-dcd9-47f8-ace9-a6376185c4e2\") " pod="openstack/nova-api-db-create-p89b7" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.571904 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-scripts\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.571955 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-run-httpd\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.571978 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwtzj\" (UniqueName: \"kubernetes.io/projected/25fa678c-1863-4d63-8dde-0b3a03e1bfa5-kube-api-access-cwtzj\") pod 
\"nova-api-4112-account-create-update-lq2nf\" (UID: \"25fa678c-1863-4d63-8dde-0b3a03e1bfa5\") " pod="openstack/nova-api-4112-account-create-update-lq2nf" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.571995 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.572021 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25fa678c-1863-4d63-8dde-0b3a03e1bfa5-operator-scripts\") pod \"nova-api-4112-account-create-update-lq2nf\" (UID: \"25fa678c-1863-4d63-8dde-0b3a03e1bfa5\") " pod="openstack/nova-api-4112-account-create-update-lq2nf" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.572068 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-config-data\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.572123 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a60e549-085d-42d0-baf7-df73fd417a77-operator-scripts\") pod \"nova-cell0-db-create-jbj8l\" (UID: \"8a60e549-085d-42d0-baf7-df73fd417a77\") " pod="openstack/nova-cell0-db-create-jbj8l" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.572151 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-log-httpd\") pod \"ceilometer-0\" (UID: 
\"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.572176 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.572198 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46258\" (UniqueName: \"kubernetes.io/projected/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-kube-api-access-46258\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.572223 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53cad663-dcd9-47f8-ace9-a6376185c4e2-operator-scripts\") pod \"nova-api-db-create-p89b7\" (UID: \"53cad663-dcd9-47f8-ace9-a6376185c4e2\") " pod="openstack/nova-api-db-create-p89b7" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.572256 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk8ss\" (UniqueName: \"kubernetes.io/projected/8a60e549-085d-42d0-baf7-df73fd417a77-kube-api-access-zk8ss\") pod \"nova-cell0-db-create-jbj8l\" (UID: \"8a60e549-085d-42d0-baf7-df73fd417a77\") " pod="openstack/nova-cell0-db-create-jbj8l" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.573043 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-log-httpd\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 
21:52:15.574387 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-run-httpd\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.575012 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53cad663-dcd9-47f8-ace9-a6376185c4e2-operator-scripts\") pod \"nova-api-db-create-p89b7\" (UID: \"53cad663-dcd9-47f8-ace9-a6376185c4e2\") " pod="openstack/nova-api-db-create-p89b7" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.603147 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-scripts\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.603650 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.604470 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.607494 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46258\" (UniqueName: \"kubernetes.io/projected/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-kube-api-access-46258\") pod \"ceilometer-0\" (UID: 
\"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.609929 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-config-data\") pod \"ceilometer-0\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.612691 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpr4t\" (UniqueName: \"kubernetes.io/projected/53cad663-dcd9-47f8-ace9-a6376185c4e2-kube-api-access-qpr4t\") pod \"nova-api-db-create-p89b7\" (UID: \"53cad663-dcd9-47f8-ace9-a6376185c4e2\") " pod="openstack/nova-api-db-create-p89b7" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.677375 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a60e549-085d-42d0-baf7-df73fd417a77-operator-scripts\") pod \"nova-cell0-db-create-jbj8l\" (UID: \"8a60e549-085d-42d0-baf7-df73fd417a77\") " pod="openstack/nova-cell0-db-create-jbj8l" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.677457 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk8ss\" (UniqueName: \"kubernetes.io/projected/8a60e549-085d-42d0-baf7-df73fd417a77-kube-api-access-zk8ss\") pod \"nova-cell0-db-create-jbj8l\" (UID: \"8a60e549-085d-42d0-baf7-df73fd417a77\") " pod="openstack/nova-cell0-db-create-jbj8l" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.677526 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwtzj\" (UniqueName: \"kubernetes.io/projected/25fa678c-1863-4d63-8dde-0b3a03e1bfa5-kube-api-access-cwtzj\") pod \"nova-api-4112-account-create-update-lq2nf\" (UID: \"25fa678c-1863-4d63-8dde-0b3a03e1bfa5\") " 
pod="openstack/nova-api-4112-account-create-update-lq2nf" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.677549 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25fa678c-1863-4d63-8dde-0b3a03e1bfa5-operator-scripts\") pod \"nova-api-4112-account-create-update-lq2nf\" (UID: \"25fa678c-1863-4d63-8dde-0b3a03e1bfa5\") " pod="openstack/nova-api-4112-account-create-update-lq2nf" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.678285 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25fa678c-1863-4d63-8dde-0b3a03e1bfa5-operator-scripts\") pod \"nova-api-4112-account-create-update-lq2nf\" (UID: \"25fa678c-1863-4d63-8dde-0b3a03e1bfa5\") " pod="openstack/nova-api-4112-account-create-update-lq2nf" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.678874 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a60e549-085d-42d0-baf7-df73fd417a77-operator-scripts\") pod \"nova-cell0-db-create-jbj8l\" (UID: \"8a60e549-085d-42d0-baf7-df73fd417a77\") " pod="openstack/nova-cell0-db-create-jbj8l" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.687285 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.711670 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwtzj\" (UniqueName: \"kubernetes.io/projected/25fa678c-1863-4d63-8dde-0b3a03e1bfa5-kube-api-access-cwtzj\") pod \"nova-api-4112-account-create-update-lq2nf\" (UID: \"25fa678c-1863-4d63-8dde-0b3a03e1bfa5\") " pod="openstack/nova-api-4112-account-create-update-lq2nf" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.724977 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6tqm5"] Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.726458 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6tqm5" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.727572 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk8ss\" (UniqueName: \"kubernetes.io/projected/8a60e549-085d-42d0-baf7-df73fd417a77-kube-api-access-zk8ss\") pod \"nova-cell0-db-create-jbj8l\" (UID: \"8a60e549-085d-42d0-baf7-df73fd417a77\") " pod="openstack/nova-cell0-db-create-jbj8l" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.732927 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-65d5455f76-k75ww" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.738300 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-p89b7" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.756962 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7721-account-create-update-2dgl4"] Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.758388 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7721-account-create-update-2dgl4" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.784925 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f173d560-1627-41c6-a033-c1c58cc63647-operator-scripts\") pod \"nova-cell0-7721-account-create-update-2dgl4\" (UID: \"f173d560-1627-41c6-a033-c1c58cc63647\") " pod="openstack/nova-cell0-7721-account-create-update-2dgl4" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.785052 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqt6x\" (UniqueName: \"kubernetes.io/projected/f173d560-1627-41c6-a033-c1c58cc63647-kube-api-access-wqt6x\") pod \"nova-cell0-7721-account-create-update-2dgl4\" (UID: \"f173d560-1627-41c6-a033-c1c58cc63647\") " pod="openstack/nova-cell0-7721-account-create-update-2dgl4" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.785073 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29jhf\" (UniqueName: \"kubernetes.io/projected/16f9ee45-7624-4137-aab8-7e6896acc26d-kube-api-access-29jhf\") pod \"nova-cell1-db-create-6tqm5\" (UID: \"16f9ee45-7624-4137-aab8-7e6896acc26d\") " pod="openstack/nova-cell1-db-create-6tqm5" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.785092 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/16f9ee45-7624-4137-aab8-7e6896acc26d-operator-scripts\") pod \"nova-cell1-db-create-6tqm5\" (UID: \"16f9ee45-7624-4137-aab8-7e6896acc26d\") " pod="openstack/nova-cell1-db-create-6tqm5" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.785165 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6tqm5"] Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.786160 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.810319 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jbj8l" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.822420 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4112-account-create-update-lq2nf" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.822827 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7721-account-create-update-2dgl4"] Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.887247 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqt6x\" (UniqueName: \"kubernetes.io/projected/f173d560-1627-41c6-a033-c1c58cc63647-kube-api-access-wqt6x\") pod \"nova-cell0-7721-account-create-update-2dgl4\" (UID: \"f173d560-1627-41c6-a033-c1c58cc63647\") " pod="openstack/nova-cell0-7721-account-create-update-2dgl4" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.887285 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29jhf\" (UniqueName: \"kubernetes.io/projected/16f9ee45-7624-4137-aab8-7e6896acc26d-kube-api-access-29jhf\") pod \"nova-cell1-db-create-6tqm5\" (UID: \"16f9ee45-7624-4137-aab8-7e6896acc26d\") " pod="openstack/nova-cell1-db-create-6tqm5" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.887313 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f9ee45-7624-4137-aab8-7e6896acc26d-operator-scripts\") pod \"nova-cell1-db-create-6tqm5\" (UID: \"16f9ee45-7624-4137-aab8-7e6896acc26d\") " pod="openstack/nova-cell1-db-create-6tqm5" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.887379 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f173d560-1627-41c6-a033-c1c58cc63647-operator-scripts\") pod \"nova-cell0-7721-account-create-update-2dgl4\" (UID: \"f173d560-1627-41c6-a033-c1c58cc63647\") " pod="openstack/nova-cell0-7721-account-create-update-2dgl4" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.888691 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f173d560-1627-41c6-a033-c1c58cc63647-operator-scripts\") pod \"nova-cell0-7721-account-create-update-2dgl4\" (UID: \"f173d560-1627-41c6-a033-c1c58cc63647\") " pod="openstack/nova-cell0-7721-account-create-update-2dgl4" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.888876 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f9ee45-7624-4137-aab8-7e6896acc26d-operator-scripts\") pod \"nova-cell1-db-create-6tqm5\" (UID: \"16f9ee45-7624-4137-aab8-7e6896acc26d\") " pod="openstack/nova-cell1-db-create-6tqm5" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.910873 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqt6x\" (UniqueName: \"kubernetes.io/projected/f173d560-1627-41c6-a033-c1c58cc63647-kube-api-access-wqt6x\") pod \"nova-cell0-7721-account-create-update-2dgl4\" (UID: \"f173d560-1627-41c6-a033-c1c58cc63647\") " pod="openstack/nova-cell0-7721-account-create-update-2dgl4" Jan 05 21:52:15 crc 
kubenswrapper[5000]: I0105 21:52:15.917996 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29jhf\" (UniqueName: \"kubernetes.io/projected/16f9ee45-7624-4137-aab8-7e6896acc26d-kube-api-access-29jhf\") pod \"nova-cell1-db-create-6tqm5\" (UID: \"16f9ee45-7624-4137-aab8-7e6896acc26d\") " pod="openstack/nova-cell1-db-create-6tqm5" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.926838 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6tqm5" Jan 05 21:52:15 crc kubenswrapper[5000]: I0105 21:52:15.971022 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7721-account-create-update-2dgl4" Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.009167 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-2d8a-account-create-update-qv4nh"] Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.010676 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.013737 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.040238 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2d8a-account-create-update-qv4nh"] Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.100686 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57qlf\" (UniqueName: \"kubernetes.io/projected/f386365d-31bf-463a-92d3-6b81c90b7786-kube-api-access-57qlf\") pod \"nova-cell1-2d8a-account-create-update-qv4nh\" (UID: \"f386365d-31bf-463a-92d3-6b81c90b7786\") " pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.100760 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f386365d-31bf-463a-92d3-6b81c90b7786-operator-scripts\") pod \"nova-cell1-2d8a-account-create-update-qv4nh\" (UID: \"f386365d-31bf-463a-92d3-6b81c90b7786\") " pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.107452 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.204560 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57qlf\" (UniqueName: \"kubernetes.io/projected/f386365d-31bf-463a-92d3-6b81c90b7786-kube-api-access-57qlf\") pod \"nova-cell1-2d8a-account-create-update-qv4nh\" (UID: \"f386365d-31bf-463a-92d3-6b81c90b7786\") " pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.204639 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f386365d-31bf-463a-92d3-6b81c90b7786-operator-scripts\") pod \"nova-cell1-2d8a-account-create-update-qv4nh\" (UID: \"f386365d-31bf-463a-92d3-6b81c90b7786\") " pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.205465 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f386365d-31bf-463a-92d3-6b81c90b7786-operator-scripts\") pod \"nova-cell1-2d8a-account-create-update-qv4nh\" (UID: \"f386365d-31bf-463a-92d3-6b81c90b7786\") " pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.226725 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57qlf\" (UniqueName: \"kubernetes.io/projected/f386365d-31bf-463a-92d3-6b81c90b7786-kube-api-access-57qlf\") pod \"nova-cell1-2d8a-account-create-update-qv4nh\" (UID: \"f386365d-31bf-463a-92d3-6b81c90b7786\") " pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.314565 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.340585 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.477748 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p89b7"] Jan 05 21:52:16 crc kubenswrapper[5000]: W0105 21:52:16.480980 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53cad663_dcd9_47f8_ace9_a6376185c4e2.slice/crio-2e8c5e863df01cfc946cdab6f0b6cf6cdd4e40746e6ef51ec38442155fd0d214 WatchSource:0}: Error finding container 2e8c5e863df01cfc946cdab6f0b6cf6cdd4e40746e6ef51ec38442155fd0d214: Status 404 returned error can't find the container with id 2e8c5e863df01cfc946cdab6f0b6cf6cdd4e40746e6ef51ec38442155fd0d214 Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.572872 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6tqm5"] Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.586747 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jbj8l"] Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.607351 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4112-account-create-update-lq2nf"] Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.616630 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7721-account-create-update-2dgl4"] Jan 05 21:52:16 crc kubenswrapper[5000]: W0105 21:52:16.653584 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf173d560_1627_41c6_a033_c1c58cc63647.slice/crio-7669c00bbbf29c2f747a3f69cf234c59cc3ef376f4d7d4e75de8dbe6478ab271 WatchSource:0}: Error finding container 7669c00bbbf29c2f747a3f69cf234c59cc3ef376f4d7d4e75de8dbe6478ab271: Status 404 returned error can't find the container with id 
7669c00bbbf29c2f747a3f69cf234c59cc3ef376f4d7d4e75de8dbe6478ab271 Jan 05 21:52:16 crc kubenswrapper[5000]: I0105 21:52:16.803128 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2d8a-account-create-update-qv4nh"] Jan 05 21:52:16 crc kubenswrapper[5000]: W0105 21:52:16.839306 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf386365d_31bf_463a_92d3_6b81c90b7786.slice/crio-e10dde0c306faeb9915a4f925d538695fcca32351a34cbe36f4cc4a2689f7fc7 WatchSource:0}: Error finding container e10dde0c306faeb9915a4f925d538695fcca32351a34cbe36f4cc4a2689f7fc7: Status 404 returned error can't find the container with id e10dde0c306faeb9915a4f925d538695fcca32351a34cbe36f4cc4a2689f7fc7 Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.206918 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.208025 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5759bb69bf-chpv9" Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.270758 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7721-account-create-update-2dgl4" event={"ID":"f173d560-1627-41c6-a033-c1c58cc63647","Type":"ContainerStarted","Data":"72b8de78bc990b250edced9fbf21e3e60192f17c5cba1c147812e64a79868ee1"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.270814 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7721-account-create-update-2dgl4" event={"ID":"f173d560-1627-41c6-a033-c1c58cc63647","Type":"ContainerStarted","Data":"7669c00bbbf29c2f747a3f69cf234c59cc3ef376f4d7d4e75de8dbe6478ab271"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.277556 5000 generic.go:334] "Generic (PLEG): container finished" podID="16f9ee45-7624-4137-aab8-7e6896acc26d" 
containerID="244da10ebe95372f1f4adde66b28a69c402733a8fa2affe052e5e13695766764" exitCode=0 Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.277694 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6tqm5" event={"ID":"16f9ee45-7624-4137-aab8-7e6896acc26d","Type":"ContainerDied","Data":"244da10ebe95372f1f4adde66b28a69c402733a8fa2affe052e5e13695766764"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.277721 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6tqm5" event={"ID":"16f9ee45-7624-4137-aab8-7e6896acc26d","Type":"ContainerStarted","Data":"1a17ced784ef7b3c4c1eea790a2ec38edc34af3748128796a677753a2442abcb"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.279414 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4112-account-create-update-lq2nf" event={"ID":"25fa678c-1863-4d63-8dde-0b3a03e1bfa5","Type":"ContainerStarted","Data":"809ff57659189c0aa7b9a59f8700d7f6a156a99880e466cb9eadff0bf3fc511a"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.279465 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4112-account-create-update-lq2nf" event={"ID":"25fa678c-1863-4d63-8dde-0b3a03e1bfa5","Type":"ContainerStarted","Data":"fe2b6aa28aba3e4eca516ba1246cdc4ee6d2b30ca36dd8c8ba4159729e099104"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.281664 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" event={"ID":"f386365d-31bf-463a-92d3-6b81c90b7786","Type":"ContainerStarted","Data":"7a0516c4dabca8f67371b346afd266261167f48564db9179cb52c1b48c873876"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.281710 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" 
event={"ID":"f386365d-31bf-463a-92d3-6b81c90b7786","Type":"ContainerStarted","Data":"e10dde0c306faeb9915a4f925d538695fcca32351a34cbe36f4cc4a2689f7fc7"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.283552 5000 generic.go:334] "Generic (PLEG): container finished" podID="8a60e549-085d-42d0-baf7-df73fd417a77" containerID="301cfa2fcd0e82be3a7cc924d90d5db5aeba6fd1805f067a483271cd1d9b8146" exitCode=0 Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.283622 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jbj8l" event={"ID":"8a60e549-085d-42d0-baf7-df73fd417a77","Type":"ContainerDied","Data":"301cfa2fcd0e82be3a7cc924d90d5db5aeba6fd1805f067a483271cd1d9b8146"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.283642 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jbj8l" event={"ID":"8a60e549-085d-42d0-baf7-df73fd417a77","Type":"ContainerStarted","Data":"bb79cd86ce7d7fc746e657b032222a18e9f547da5fe452e38c80fe59af5cad1e"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.284880 5000 generic.go:334] "Generic (PLEG): container finished" podID="53cad663-dcd9-47f8-ace9-a6376185c4e2" containerID="2b739df2687959e1e0a43aa3cbce7c1c695b06cb81dfcf906d16a45782ca5d0b" exitCode=0 Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.285090 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p89b7" event={"ID":"53cad663-dcd9-47f8-ace9-a6376185c4e2","Type":"ContainerDied","Data":"2b739df2687959e1e0a43aa3cbce7c1c695b06cb81dfcf906d16a45782ca5d0b"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.285115 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p89b7" event={"ID":"53cad663-dcd9-47f8-ace9-a6376185c4e2","Type":"ContainerStarted","Data":"2e8c5e863df01cfc946cdab6f0b6cf6cdd4e40746e6ef51ec38442155fd0d214"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.290355 5000 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b70ecf5c-ed13-4825-a0ab-ae258235b3bf","Type":"ContainerStarted","Data":"fc3a3fcffd375a70a46dce39709b2e0832bf79d400e2e35c8624645af231a8e3"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.290584 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b70ecf5c-ed13-4825-a0ab-ae258235b3bf","Type":"ContainerStarted","Data":"384be6085c0e8a1846a1d475f0ebed4b78cfa25fa11d2238c98f6b51968e1ac3"} Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.315006 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-7721-account-create-update-2dgl4" podStartSLOduration=2.314989236 podStartE2EDuration="2.314989236s" podCreationTimestamp="2026-01-05 21:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:52:17.311749534 +0000 UTC m=+1092.267952003" watchObservedRunningTime="2026-01-05 21:52:17.314989236 +0000 UTC m=+1092.271191705" Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.342420 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77e33e26-6a57-4f48-9d16-3bb5502b1f76" path="/var/lib/kubelet/pods/77e33e26-6a57-4f48-9d16-3bb5502b1f76/volumes" Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.371851 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" podStartSLOduration=2.371832446 podStartE2EDuration="2.371832446s" podCreationTimestamp="2026-01-05 21:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:52:17.370147998 +0000 UTC m=+1092.326350467" watchObservedRunningTime="2026-01-05 21:52:17.371832446 +0000 UTC m=+1092.328034905" Jan 05 21:52:17 crc kubenswrapper[5000]: I0105 21:52:17.389478 
5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-4112-account-create-update-lq2nf" podStartSLOduration=2.389462758 podStartE2EDuration="2.389462758s" podCreationTimestamp="2026-01-05 21:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:52:17.383387175 +0000 UTC m=+1092.339589654" watchObservedRunningTime="2026-01-05 21:52:17.389462758 +0000 UTC m=+1092.345665227" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.198159 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.240061 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-combined-ca-bundle\") pod \"2e11de54-ff33-4464-ab87-a565a688e5b5\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.240489 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e11de54-ff33-4464-ab87-a565a688e5b5-httpd-run\") pod \"2e11de54-ff33-4464-ab87-a565a688e5b5\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.240557 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzg4h\" (UniqueName: \"kubernetes.io/projected/2e11de54-ff33-4464-ab87-a565a688e5b5-kube-api-access-pzg4h\") pod \"2e11de54-ff33-4464-ab87-a565a688e5b5\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.240689 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-config-data\") pod \"2e11de54-ff33-4464-ab87-a565a688e5b5\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.240717 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-internal-tls-certs\") pod \"2e11de54-ff33-4464-ab87-a565a688e5b5\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.240777 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-scripts\") pod \"2e11de54-ff33-4464-ab87-a565a688e5b5\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.240807 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"2e11de54-ff33-4464-ab87-a565a688e5b5\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.240849 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e11de54-ff33-4464-ab87-a565a688e5b5-logs\") pod \"2e11de54-ff33-4464-ab87-a565a688e5b5\" (UID: \"2e11de54-ff33-4464-ab87-a565a688e5b5\") " Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.241721 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e11de54-ff33-4464-ab87-a565a688e5b5-logs" (OuterVolumeSpecName: "logs") pod "2e11de54-ff33-4464-ab87-a565a688e5b5" (UID: "2e11de54-ff33-4464-ab87-a565a688e5b5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.256436 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e11de54-ff33-4464-ab87-a565a688e5b5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2e11de54-ff33-4464-ab87-a565a688e5b5" (UID: "2e11de54-ff33-4464-ab87-a565a688e5b5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.259098 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "2e11de54-ff33-4464-ab87-a565a688e5b5" (UID: "2e11de54-ff33-4464-ab87-a565a688e5b5"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.260515 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-scripts" (OuterVolumeSpecName: "scripts") pod "2e11de54-ff33-4464-ab87-a565a688e5b5" (UID: "2e11de54-ff33-4464-ab87-a565a688e5b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.271242 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e11de54-ff33-4464-ab87-a565a688e5b5-kube-api-access-pzg4h" (OuterVolumeSpecName: "kube-api-access-pzg4h") pod "2e11de54-ff33-4464-ab87-a565a688e5b5" (UID: "2e11de54-ff33-4464-ab87-a565a688e5b5"). InnerVolumeSpecName "kube-api-access-pzg4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.336762 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e11de54-ff33-4464-ab87-a565a688e5b5" (UID: "2e11de54-ff33-4464-ab87-a565a688e5b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.346999 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b70ecf5c-ed13-4825-a0ab-ae258235b3bf","Type":"ContainerStarted","Data":"39a687792d64b2367380bb265054207b7b22d14b70c03a512d61a4001686544f"} Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.354436 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.354505 5000 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.354522 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e11de54-ff33-4464-ab87-a565a688e5b5-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.354537 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.354551 5000 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2e11de54-ff33-4464-ab87-a565a688e5b5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.354568 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzg4h\" (UniqueName: \"kubernetes.io/projected/2e11de54-ff33-4464-ab87-a565a688e5b5-kube-api-access-pzg4h\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.362069 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-config-data" (OuterVolumeSpecName: "config-data") pod "2e11de54-ff33-4464-ab87-a565a688e5b5" (UID: "2e11de54-ff33-4464-ab87-a565a688e5b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.367911 5000 generic.go:334] "Generic (PLEG): container finished" podID="2e11de54-ff33-4464-ab87-a565a688e5b5" containerID="76e09c4cbb3e238dcd2e7491cb919eb10fdf4583f80385175b8a5625a9b06317" exitCode=0 Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.368025 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e11de54-ff33-4464-ab87-a565a688e5b5","Type":"ContainerDied","Data":"76e09c4cbb3e238dcd2e7491cb919eb10fdf4583f80385175b8a5625a9b06317"} Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.368056 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2e11de54-ff33-4464-ab87-a565a688e5b5","Type":"ContainerDied","Data":"e7a6e620eecab8109d33efbaccb5eb31ad7b11a1c08ede39728959e5070cb633"} Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.368074 5000 scope.go:117] "RemoveContainer" containerID="76e09c4cbb3e238dcd2e7491cb919eb10fdf4583f80385175b8a5625a9b06317" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.368330 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.380353 5000 generic.go:334] "Generic (PLEG): container finished" podID="f173d560-1627-41c6-a033-c1c58cc63647" containerID="72b8de78bc990b250edced9fbf21e3e60192f17c5cba1c147812e64a79868ee1" exitCode=0 Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.380597 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7721-account-create-update-2dgl4" event={"ID":"f173d560-1627-41c6-a033-c1c58cc63647","Type":"ContainerDied","Data":"72b8de78bc990b250edced9fbf21e3e60192f17c5cba1c147812e64a79868ee1"} Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.382228 5000 generic.go:334] "Generic (PLEG): container finished" podID="f386365d-31bf-463a-92d3-6b81c90b7786" containerID="7a0516c4dabca8f67371b346afd266261167f48564db9179cb52c1b48c873876" exitCode=0 Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.382304 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" event={"ID":"f386365d-31bf-463a-92d3-6b81c90b7786","Type":"ContainerDied","Data":"7a0516c4dabca8f67371b346afd266261167f48564db9179cb52c1b48c873876"} Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.383690 5000 generic.go:334] "Generic (PLEG): container finished" podID="25fa678c-1863-4d63-8dde-0b3a03e1bfa5" containerID="809ff57659189c0aa7b9a59f8700d7f6a156a99880e466cb9eadff0bf3fc511a" exitCode=0 Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.383846 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4112-account-create-update-lq2nf" event={"ID":"25fa678c-1863-4d63-8dde-0b3a03e1bfa5","Type":"ContainerDied","Data":"809ff57659189c0aa7b9a59f8700d7f6a156a99880e466cb9eadff0bf3fc511a"} Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.407090 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2e11de54-ff33-4464-ab87-a565a688e5b5" (UID: "2e11de54-ff33-4464-ab87-a565a688e5b5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.455788 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.455825 5000 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e11de54-ff33-4464-ab87-a565a688e5b5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.471080 5000 scope.go:117] "RemoveContainer" containerID="5b3e19ec85ab5d1bfc58b48b7c2c6760222c79cbb6bec39f4cba459ef7bca5cf" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.484925 5000 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.503174 5000 scope.go:117] "RemoveContainer" containerID="76e09c4cbb3e238dcd2e7491cb919eb10fdf4583f80385175b8a5625a9b06317" Jan 05 21:52:18 crc kubenswrapper[5000]: E0105 21:52:18.503592 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76e09c4cbb3e238dcd2e7491cb919eb10fdf4583f80385175b8a5625a9b06317\": container with ID starting with 76e09c4cbb3e238dcd2e7491cb919eb10fdf4583f80385175b8a5625a9b06317 not found: ID does not exist" containerID="76e09c4cbb3e238dcd2e7491cb919eb10fdf4583f80385175b8a5625a9b06317" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.503631 5000 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e09c4cbb3e238dcd2e7491cb919eb10fdf4583f80385175b8a5625a9b06317"} err="failed to get container status \"76e09c4cbb3e238dcd2e7491cb919eb10fdf4583f80385175b8a5625a9b06317\": rpc error: code = NotFound desc = could not find container \"76e09c4cbb3e238dcd2e7491cb919eb10fdf4583f80385175b8a5625a9b06317\": container with ID starting with 76e09c4cbb3e238dcd2e7491cb919eb10fdf4583f80385175b8a5625a9b06317 not found: ID does not exist" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.503653 5000 scope.go:117] "RemoveContainer" containerID="5b3e19ec85ab5d1bfc58b48b7c2c6760222c79cbb6bec39f4cba459ef7bca5cf" Jan 05 21:52:18 crc kubenswrapper[5000]: E0105 21:52:18.504201 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3e19ec85ab5d1bfc58b48b7c2c6760222c79cbb6bec39f4cba459ef7bca5cf\": container with ID starting with 5b3e19ec85ab5d1bfc58b48b7c2c6760222c79cbb6bec39f4cba459ef7bca5cf not found: ID does not exist" containerID="5b3e19ec85ab5d1bfc58b48b7c2c6760222c79cbb6bec39f4cba459ef7bca5cf" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.504238 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3e19ec85ab5d1bfc58b48b7c2c6760222c79cbb6bec39f4cba459ef7bca5cf"} err="failed to get container status \"5b3e19ec85ab5d1bfc58b48b7c2c6760222c79cbb6bec39f4cba459ef7bca5cf\": rpc error: code = NotFound desc = could not find container \"5b3e19ec85ab5d1bfc58b48b7c2c6760222c79cbb6bec39f4cba459ef7bca5cf\": container with ID starting with 5b3e19ec85ab5d1bfc58b48b7c2c6760222c79cbb6bec39f4cba459ef7bca5cf not found: ID does not exist" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.566975 5000 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:18 crc 
kubenswrapper[5000]: I0105 21:52:18.744388 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.758359 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.799864 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:52:18 crc kubenswrapper[5000]: E0105 21:52:18.800309 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e11de54-ff33-4464-ab87-a565a688e5b5" containerName="glance-log" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.800321 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e11de54-ff33-4464-ab87-a565a688e5b5" containerName="glance-log" Jan 05 21:52:18 crc kubenswrapper[5000]: E0105 21:52:18.800339 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e11de54-ff33-4464-ab87-a565a688e5b5" containerName="glance-httpd" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.800344 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e11de54-ff33-4464-ab87-a565a688e5b5" containerName="glance-httpd" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.800506 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e11de54-ff33-4464-ab87-a565a688e5b5" containerName="glance-httpd" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.800522 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e11de54-ff33-4464-ab87-a565a688e5b5" containerName="glance-log" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.801453 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.803598 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.803980 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.808543 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.975467 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-logs\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.975627 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t5pp\" (UniqueName: \"kubernetes.io/projected/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-kube-api-access-8t5pp\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.975678 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.975708 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.975735 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.975759 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.975790 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:18 crc kubenswrapper[5000]: I0105 21:52:18.975859 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.077786 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-logs\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.078166 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t5pp\" (UniqueName: \"kubernetes.io/projected/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-kube-api-access-8t5pp\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.078197 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.078221 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.078241 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.078257 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.078276 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.078296 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-logs\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.078323 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.078661 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.080258 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.087723 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.088401 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.114768 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.120952 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.126907 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t5pp\" (UniqueName: \"kubernetes.io/projected/62ae3bff-5f88-4662-86d4-0a4e1c51c8be-kube-api-access-8t5pp\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " 
pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.161527 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"62ae3bff-5f88-4662-86d4-0a4e1c51c8be\") " pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.255872 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p89b7" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.261117 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jbj8l" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.267840 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6tqm5" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.339712 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e11de54-ff33-4464-ab87-a565a688e5b5" path="/var/lib/kubelet/pods/2e11de54-ff33-4464-ab87-a565a688e5b5/volumes" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.383675 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpr4t\" (UniqueName: \"kubernetes.io/projected/53cad663-dcd9-47f8-ace9-a6376185c4e2-kube-api-access-qpr4t\") pod \"53cad663-dcd9-47f8-ace9-a6376185c4e2\" (UID: \"53cad663-dcd9-47f8-ace9-a6376185c4e2\") " Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.384433 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53cad663-dcd9-47f8-ace9-a6376185c4e2-operator-scripts\") pod \"53cad663-dcd9-47f8-ace9-a6376185c4e2\" (UID: \"53cad663-dcd9-47f8-ace9-a6376185c4e2\") " Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 
21:52:19.384603 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk8ss\" (UniqueName: \"kubernetes.io/projected/8a60e549-085d-42d0-baf7-df73fd417a77-kube-api-access-zk8ss\") pod \"8a60e549-085d-42d0-baf7-df73fd417a77\" (UID: \"8a60e549-085d-42d0-baf7-df73fd417a77\") " Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.384689 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f9ee45-7624-4137-aab8-7e6896acc26d-operator-scripts\") pod \"16f9ee45-7624-4137-aab8-7e6896acc26d\" (UID: \"16f9ee45-7624-4137-aab8-7e6896acc26d\") " Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.384750 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29jhf\" (UniqueName: \"kubernetes.io/projected/16f9ee45-7624-4137-aab8-7e6896acc26d-kube-api-access-29jhf\") pod \"16f9ee45-7624-4137-aab8-7e6896acc26d\" (UID: \"16f9ee45-7624-4137-aab8-7e6896acc26d\") " Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.384782 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a60e549-085d-42d0-baf7-df73fd417a77-operator-scripts\") pod \"8a60e549-085d-42d0-baf7-df73fd417a77\" (UID: \"8a60e549-085d-42d0-baf7-df73fd417a77\") " Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.385850 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a60e549-085d-42d0-baf7-df73fd417a77-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a60e549-085d-42d0-baf7-df73fd417a77" (UID: "8a60e549-085d-42d0-baf7-df73fd417a77"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.385848 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53cad663-dcd9-47f8-ace9-a6376185c4e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53cad663-dcd9-47f8-ace9-a6376185c4e2" (UID: "53cad663-dcd9-47f8-ace9-a6376185c4e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.386184 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f9ee45-7624-4137-aab8-7e6896acc26d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16f9ee45-7624-4137-aab8-7e6896acc26d" (UID: "16f9ee45-7624-4137-aab8-7e6896acc26d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.389793 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53cad663-dcd9-47f8-ace9-a6376185c4e2-kube-api-access-qpr4t" (OuterVolumeSpecName: "kube-api-access-qpr4t") pod "53cad663-dcd9-47f8-ace9-a6376185c4e2" (UID: "53cad663-dcd9-47f8-ace9-a6376185c4e2"). InnerVolumeSpecName "kube-api-access-qpr4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.390999 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a60e549-085d-42d0-baf7-df73fd417a77-kube-api-access-zk8ss" (OuterVolumeSpecName: "kube-api-access-zk8ss") pod "8a60e549-085d-42d0-baf7-df73fd417a77" (UID: "8a60e549-085d-42d0-baf7-df73fd417a77"). InnerVolumeSpecName "kube-api-access-zk8ss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.400799 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jbj8l" event={"ID":"8a60e549-085d-42d0-baf7-df73fd417a77","Type":"ContainerDied","Data":"bb79cd86ce7d7fc746e657b032222a18e9f547da5fe452e38c80fe59af5cad1e"} Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.400850 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb79cd86ce7d7fc746e657b032222a18e9f547da5fe452e38c80fe59af5cad1e" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.400921 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jbj8l" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.406971 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f9ee45-7624-4137-aab8-7e6896acc26d-kube-api-access-29jhf" (OuterVolumeSpecName: "kube-api-access-29jhf") pod "16f9ee45-7624-4137-aab8-7e6896acc26d" (UID: "16f9ee45-7624-4137-aab8-7e6896acc26d"). InnerVolumeSpecName "kube-api-access-29jhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.407432 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6tqm5" event={"ID":"16f9ee45-7624-4137-aab8-7e6896acc26d","Type":"ContainerDied","Data":"1a17ced784ef7b3c4c1eea790a2ec38edc34af3748128796a677753a2442abcb"} Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.407472 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a17ced784ef7b3c4c1eea790a2ec38edc34af3748128796a677753a2442abcb" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.407531 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6tqm5" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.414533 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p89b7" event={"ID":"53cad663-dcd9-47f8-ace9-a6376185c4e2","Type":"ContainerDied","Data":"2e8c5e863df01cfc946cdab6f0b6cf6cdd4e40746e6ef51ec38442155fd0d214"} Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.414566 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e8c5e863df01cfc946cdab6f0b6cf6cdd4e40746e6ef51ec38442155fd0d214" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.414616 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p89b7" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.417725 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b70ecf5c-ed13-4825-a0ab-ae258235b3bf","Type":"ContainerStarted","Data":"8dd07f684fa16bb86d8c0855474803c0ac10476b5f2797e5b5c0e01816ddd1e7"} Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.434746 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.486535 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29jhf\" (UniqueName: \"kubernetes.io/projected/16f9ee45-7624-4137-aab8-7e6896acc26d-kube-api-access-29jhf\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.486565 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a60e549-085d-42d0-baf7-df73fd417a77-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.486575 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpr4t\" (UniqueName: \"kubernetes.io/projected/53cad663-dcd9-47f8-ace9-a6376185c4e2-kube-api-access-qpr4t\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.486584 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53cad663-dcd9-47f8-ace9-a6376185c4e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.486592 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk8ss\" (UniqueName: \"kubernetes.io/projected/8a60e549-085d-42d0-baf7-df73fd417a77-kube-api-access-zk8ss\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.486601 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f9ee45-7624-4137-aab8-7e6896acc26d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.891479 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4112-account-create-update-lq2nf" Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.998410 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25fa678c-1863-4d63-8dde-0b3a03e1bfa5-operator-scripts\") pod \"25fa678c-1863-4d63-8dde-0b3a03e1bfa5\" (UID: \"25fa678c-1863-4d63-8dde-0b3a03e1bfa5\") " Jan 05 21:52:19 crc kubenswrapper[5000]: I0105 21:52:19.998515 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwtzj\" (UniqueName: \"kubernetes.io/projected/25fa678c-1863-4d63-8dde-0b3a03e1bfa5-kube-api-access-cwtzj\") pod \"25fa678c-1863-4d63-8dde-0b3a03e1bfa5\" (UID: \"25fa678c-1863-4d63-8dde-0b3a03e1bfa5\") " Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.000249 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25fa678c-1863-4d63-8dde-0b3a03e1bfa5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25fa678c-1863-4d63-8dde-0b3a03e1bfa5" (UID: "25fa678c-1863-4d63-8dde-0b3a03e1bfa5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.002850 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25fa678c-1863-4d63-8dde-0b3a03e1bfa5-kube-api-access-cwtzj" (OuterVolumeSpecName: "kube-api-access-cwtzj") pod "25fa678c-1863-4d63-8dde-0b3a03e1bfa5" (UID: "25fa678c-1863-4d63-8dde-0b3a03e1bfa5"). InnerVolumeSpecName "kube-api-access-cwtzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.031430 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.043152 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7721-account-create-update-2dgl4" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.103146 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25fa678c-1863-4d63-8dde-0b3a03e1bfa5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.103181 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwtzj\" (UniqueName: \"kubernetes.io/projected/25fa678c-1863-4d63-8dde-0b3a03e1bfa5-kube-api-access-cwtzj\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.139444 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.204561 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqt6x\" (UniqueName: \"kubernetes.io/projected/f173d560-1627-41c6-a033-c1c58cc63647-kube-api-access-wqt6x\") pod \"f173d560-1627-41c6-a033-c1c58cc63647\" (UID: \"f173d560-1627-41c6-a033-c1c58cc63647\") " Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.204646 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f173d560-1627-41c6-a033-c1c58cc63647-operator-scripts\") pod \"f173d560-1627-41c6-a033-c1c58cc63647\" (UID: \"f173d560-1627-41c6-a033-c1c58cc63647\") " Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.204833 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f386365d-31bf-463a-92d3-6b81c90b7786-operator-scripts\") 
pod \"f386365d-31bf-463a-92d3-6b81c90b7786\" (UID: \"f386365d-31bf-463a-92d3-6b81c90b7786\") " Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.204860 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57qlf\" (UniqueName: \"kubernetes.io/projected/f386365d-31bf-463a-92d3-6b81c90b7786-kube-api-access-57qlf\") pod \"f386365d-31bf-463a-92d3-6b81c90b7786\" (UID: \"f386365d-31bf-463a-92d3-6b81c90b7786\") " Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.205435 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f386365d-31bf-463a-92d3-6b81c90b7786-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f386365d-31bf-463a-92d3-6b81c90b7786" (UID: "f386365d-31bf-463a-92d3-6b81c90b7786"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.205551 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f173d560-1627-41c6-a033-c1c58cc63647-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f173d560-1627-41c6-a033-c1c58cc63647" (UID: "f173d560-1627-41c6-a033-c1c58cc63647"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.205917 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f386365d-31bf-463a-92d3-6b81c90b7786-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.205942 5000 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f173d560-1627-41c6-a033-c1c58cc63647-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.208705 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f386365d-31bf-463a-92d3-6b81c90b7786-kube-api-access-57qlf" (OuterVolumeSpecName: "kube-api-access-57qlf") pod "f386365d-31bf-463a-92d3-6b81c90b7786" (UID: "f386365d-31bf-463a-92d3-6b81c90b7786"). InnerVolumeSpecName "kube-api-access-57qlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.209066 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f173d560-1627-41c6-a033-c1c58cc63647-kube-api-access-wqt6x" (OuterVolumeSpecName: "kube-api-access-wqt6x") pod "f173d560-1627-41c6-a033-c1c58cc63647" (UID: "f173d560-1627-41c6-a033-c1c58cc63647"). InnerVolumeSpecName "kube-api-access-wqt6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.309241 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqt6x\" (UniqueName: \"kubernetes.io/projected/f173d560-1627-41c6-a033-c1c58cc63647-kube-api-access-wqt6x\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.309645 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57qlf\" (UniqueName: \"kubernetes.io/projected/f386365d-31bf-463a-92d3-6b81c90b7786-kube-api-access-57qlf\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.436189 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" event={"ID":"f386365d-31bf-463a-92d3-6b81c90b7786","Type":"ContainerDied","Data":"e10dde0c306faeb9915a4f925d538695fcca32351a34cbe36f4cc4a2689f7fc7"} Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.436423 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e10dde0c306faeb9915a4f925d538695fcca32351a34cbe36f4cc4a2689f7fc7" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.436594 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2d8a-account-create-update-qv4nh" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.463390 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4112-account-create-update-lq2nf" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.465112 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4112-account-create-update-lq2nf" event={"ID":"25fa678c-1863-4d63-8dde-0b3a03e1bfa5","Type":"ContainerDied","Data":"fe2b6aa28aba3e4eca516ba1246cdc4ee6d2b30ca36dd8c8ba4159729e099104"} Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.465169 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe2b6aa28aba3e4eca516ba1246cdc4ee6d2b30ca36dd8c8ba4159729e099104" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.483549 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"62ae3bff-5f88-4662-86d4-0a4e1c51c8be","Type":"ContainerStarted","Data":"78c952cd8a9894321e766f277c961e97fb68fbdd70f0ff7565694edde7cdffe9"} Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.490703 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7721-account-create-update-2dgl4" event={"ID":"f173d560-1627-41c6-a033-c1c58cc63647","Type":"ContainerDied","Data":"7669c00bbbf29c2f747a3f69cf234c59cc3ef376f4d7d4e75de8dbe6478ab271"} Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.491188 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7721-account-create-update-2dgl4" Jan 05 21:52:20 crc kubenswrapper[5000]: I0105 21:52:20.490791 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7669c00bbbf29c2f747a3f69cf234c59cc3ef376f4d7d4e75de8dbe6478ab271" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.022947 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qtjzr"] Jan 05 21:52:21 crc kubenswrapper[5000]: E0105 21:52:21.023786 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a60e549-085d-42d0-baf7-df73fd417a77" containerName="mariadb-database-create" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.023799 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a60e549-085d-42d0-baf7-df73fd417a77" containerName="mariadb-database-create" Jan 05 21:52:21 crc kubenswrapper[5000]: E0105 21:52:21.023812 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25fa678c-1863-4d63-8dde-0b3a03e1bfa5" containerName="mariadb-account-create-update" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.023818 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="25fa678c-1863-4d63-8dde-0b3a03e1bfa5" containerName="mariadb-account-create-update" Jan 05 21:52:21 crc kubenswrapper[5000]: E0105 21:52:21.023836 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f173d560-1627-41c6-a033-c1c58cc63647" containerName="mariadb-account-create-update" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.023842 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f173d560-1627-41c6-a033-c1c58cc63647" containerName="mariadb-account-create-update" Jan 05 21:52:21 crc kubenswrapper[5000]: E0105 21:52:21.023852 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f9ee45-7624-4137-aab8-7e6896acc26d" containerName="mariadb-database-create" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 
21:52:21.023857 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f9ee45-7624-4137-aab8-7e6896acc26d" containerName="mariadb-database-create" Jan 05 21:52:21 crc kubenswrapper[5000]: E0105 21:52:21.023878 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f386365d-31bf-463a-92d3-6b81c90b7786" containerName="mariadb-account-create-update" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.023884 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f386365d-31bf-463a-92d3-6b81c90b7786" containerName="mariadb-account-create-update" Jan 05 21:52:21 crc kubenswrapper[5000]: E0105 21:52:21.023909 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53cad663-dcd9-47f8-ace9-a6376185c4e2" containerName="mariadb-database-create" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.023915 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="53cad663-dcd9-47f8-ace9-a6376185c4e2" containerName="mariadb-database-create" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.024088 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="53cad663-dcd9-47f8-ace9-a6376185c4e2" containerName="mariadb-database-create" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.024102 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a60e549-085d-42d0-baf7-df73fd417a77" containerName="mariadb-database-create" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.024113 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f173d560-1627-41c6-a033-c1c58cc63647" containerName="mariadb-account-create-update" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.024127 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f386365d-31bf-463a-92d3-6b81c90b7786" containerName="mariadb-account-create-update" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.024139 5000 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="16f9ee45-7624-4137-aab8-7e6896acc26d" containerName="mariadb-database-create" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.024145 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="25fa678c-1863-4d63-8dde-0b3a03e1bfa5" containerName="mariadb-account-create-update" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.024677 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.026385 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.026594 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.026936 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-x4dmg" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.029232 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qtjzr"] Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.130167 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65d5455f76-k75ww" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.223880 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qtjzr\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.225843 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-config-data\") pod \"nova-cell0-conductor-db-sync-qtjzr\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.225967 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-scripts\") pod \"nova-cell0-conductor-db-sync-qtjzr\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.226119 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dzkq\" (UniqueName: \"kubernetes.io/projected/fadbba38-e7c5-464a-99d9-7895875ab04b-kube-api-access-4dzkq\") pod \"nova-cell0-conductor-db-sync-qtjzr\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.330084 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-combined-ca-bundle\") pod \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.330351 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e000bdc7-d544-4dfe-ab2e-6c43a7453748-scripts\") pod \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.330395 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-horizon-tls-certs\") pod \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.330419 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e000bdc7-d544-4dfe-ab2e-6c43a7453748-logs\") pod \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.330460 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e000bdc7-d544-4dfe-ab2e-6c43a7453748-config-data\") pod \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.330498 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-horizon-secret-key\") pod \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.330525 5000 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8xgh\" (UniqueName: \"kubernetes.io/projected/e000bdc7-d544-4dfe-ab2e-6c43a7453748-kube-api-access-j8xgh\") pod \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\" (UID: \"e000bdc7-d544-4dfe-ab2e-6c43a7453748\") " Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.330646 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dzkq\" (UniqueName: \"kubernetes.io/projected/fadbba38-e7c5-464a-99d9-7895875ab04b-kube-api-access-4dzkq\") pod \"nova-cell0-conductor-db-sync-qtjzr\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.330673 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qtjzr\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.330721 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-config-data\") pod \"nova-cell0-conductor-db-sync-qtjzr\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.330774 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-scripts\") pod \"nova-cell0-conductor-db-sync-qtjzr\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.336154 5000 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e000bdc7-d544-4dfe-ab2e-6c43a7453748-logs" (OuterVolumeSpecName: "logs") pod "e000bdc7-d544-4dfe-ab2e-6c43a7453748" (UID: "e000bdc7-d544-4dfe-ab2e-6c43a7453748"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.339114 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e000bdc7-d544-4dfe-ab2e-6c43a7453748-kube-api-access-j8xgh" (OuterVolumeSpecName: "kube-api-access-j8xgh") pod "e000bdc7-d544-4dfe-ab2e-6c43a7453748" (UID: "e000bdc7-d544-4dfe-ab2e-6c43a7453748"). InnerVolumeSpecName "kube-api-access-j8xgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.340555 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-scripts\") pod \"nova-cell0-conductor-db-sync-qtjzr\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.351358 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e000bdc7-d544-4dfe-ab2e-6c43a7453748" (UID: "e000bdc7-d544-4dfe-ab2e-6c43a7453748"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.351622 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qtjzr\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.353319 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-config-data\") pod \"nova-cell0-conductor-db-sync-qtjzr\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.358264 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dzkq\" (UniqueName: \"kubernetes.io/projected/fadbba38-e7c5-464a-99d9-7895875ab04b-kube-api-access-4dzkq\") pod \"nova-cell0-conductor-db-sync-qtjzr\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.371523 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e000bdc7-d544-4dfe-ab2e-6c43a7453748-config-data" (OuterVolumeSpecName: "config-data") pod "e000bdc7-d544-4dfe-ab2e-6c43a7453748" (UID: "e000bdc7-d544-4dfe-ab2e-6c43a7453748"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.378200 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e000bdc7-d544-4dfe-ab2e-6c43a7453748-scripts" (OuterVolumeSpecName: "scripts") pod "e000bdc7-d544-4dfe-ab2e-6c43a7453748" (UID: "e000bdc7-d544-4dfe-ab2e-6c43a7453748"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.386302 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e000bdc7-d544-4dfe-ab2e-6c43a7453748" (UID: "e000bdc7-d544-4dfe-ab2e-6c43a7453748"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.411191 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "e000bdc7-d544-4dfe-ab2e-6c43a7453748" (UID: "e000bdc7-d544-4dfe-ab2e-6c43a7453748"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.431639 5000 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.431689 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8xgh\" (UniqueName: \"kubernetes.io/projected/e000bdc7-d544-4dfe-ab2e-6c43a7453748-kube-api-access-j8xgh\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.431701 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.431712 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e000bdc7-d544-4dfe-ab2e-6c43a7453748-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.431724 5000 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e000bdc7-d544-4dfe-ab2e-6c43a7453748-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.431733 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e000bdc7-d544-4dfe-ab2e-6c43a7453748-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.431742 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e000bdc7-d544-4dfe-ab2e-6c43a7453748-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.442912 5000 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.530617 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b70ecf5c-ed13-4825-a0ab-ae258235b3bf","Type":"ContainerStarted","Data":"ae354d12f934e65d833d3bcf4a04e03baea045cefa252e9e49b4f98a89150f03"} Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.531122 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="ceilometer-central-agent" containerID="cri-o://fc3a3fcffd375a70a46dce39709b2e0832bf79d400e2e35c8624645af231a8e3" gracePeriod=30 Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.531420 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.531721 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="proxy-httpd" containerID="cri-o://ae354d12f934e65d833d3bcf4a04e03baea045cefa252e9e49b4f98a89150f03" gracePeriod=30 Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.531767 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="sg-core" containerID="cri-o://8dd07f684fa16bb86d8c0855474803c0ac10476b5f2797e5b5c0e01816ddd1e7" gracePeriod=30 Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.531809 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="ceilometer-notification-agent" containerID="cri-o://39a687792d64b2367380bb265054207b7b22d14b70c03a512d61a4001686544f" gracePeriod=30 Jan 05 21:52:21 crc 
kubenswrapper[5000]: I0105 21:52:21.557927 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"62ae3bff-5f88-4662-86d4-0a4e1c51c8be","Type":"ContainerStarted","Data":"3040216e3cb7bc361842a8867ee32bd94b5b9e4a188debc4c1ed0ce21c7ada3c"} Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.586346 5000 generic.go:334] "Generic (PLEG): container finished" podID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerID="2bc68cc6f289e4695987859a861fc71e979fb05f30cd34067e711b63a3b9ff85" exitCode=137 Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.586380 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65d5455f76-k75ww" event={"ID":"e000bdc7-d544-4dfe-ab2e-6c43a7453748","Type":"ContainerDied","Data":"2bc68cc6f289e4695987859a861fc71e979fb05f30cd34067e711b63a3b9ff85"} Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.586400 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65d5455f76-k75ww" event={"ID":"e000bdc7-d544-4dfe-ab2e-6c43a7453748","Type":"ContainerDied","Data":"c5e1e95e590028c083713ec1d7479b91b6cfc18a9604764c9af2a321c40a3b73"} Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.586417 5000 scope.go:117] "RemoveContainer" containerID="d96fceace8ba67a8696e1baf1bcacdfd1837094a7e25764234e2ee39c7437769" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.586568 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65d5455f76-k75ww" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.625469 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.50987793 podStartE2EDuration="6.625453091s" podCreationTimestamp="2026-01-05 21:52:15 +0000 UTC" firstStartedPulling="2026-01-05 21:52:16.315143694 +0000 UTC m=+1091.271346163" lastFinishedPulling="2026-01-05 21:52:20.430718855 +0000 UTC m=+1095.386921324" observedRunningTime="2026-01-05 21:52:21.574558851 +0000 UTC m=+1096.530761340" watchObservedRunningTime="2026-01-05 21:52:21.625453091 +0000 UTC m=+1096.581655560" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.638443 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65d5455f76-k75ww"] Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.644081 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-65d5455f76-k75ww"] Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.847603 5000 scope.go:117] "RemoveContainer" containerID="2bc68cc6f289e4695987859a861fc71e979fb05f30cd34067e711b63a3b9ff85" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.904377 5000 scope.go:117] "RemoveContainer" containerID="d96fceace8ba67a8696e1baf1bcacdfd1837094a7e25764234e2ee39c7437769" Jan 05 21:52:21 crc kubenswrapper[5000]: E0105 21:52:21.904936 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d96fceace8ba67a8696e1baf1bcacdfd1837094a7e25764234e2ee39c7437769\": container with ID starting with d96fceace8ba67a8696e1baf1bcacdfd1837094a7e25764234e2ee39c7437769 not found: ID does not exist" containerID="d96fceace8ba67a8696e1baf1bcacdfd1837094a7e25764234e2ee39c7437769" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.904986 5000 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d96fceace8ba67a8696e1baf1bcacdfd1837094a7e25764234e2ee39c7437769"} err="failed to get container status \"d96fceace8ba67a8696e1baf1bcacdfd1837094a7e25764234e2ee39c7437769\": rpc error: code = NotFound desc = could not find container \"d96fceace8ba67a8696e1baf1bcacdfd1837094a7e25764234e2ee39c7437769\": container with ID starting with d96fceace8ba67a8696e1baf1bcacdfd1837094a7e25764234e2ee39c7437769 not found: ID does not exist" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.905010 5000 scope.go:117] "RemoveContainer" containerID="2bc68cc6f289e4695987859a861fc71e979fb05f30cd34067e711b63a3b9ff85" Jan 05 21:52:21 crc kubenswrapper[5000]: E0105 21:52:21.905403 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bc68cc6f289e4695987859a861fc71e979fb05f30cd34067e711b63a3b9ff85\": container with ID starting with 2bc68cc6f289e4695987859a861fc71e979fb05f30cd34067e711b63a3b9ff85 not found: ID does not exist" containerID="2bc68cc6f289e4695987859a861fc71e979fb05f30cd34067e711b63a3b9ff85" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.905435 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bc68cc6f289e4695987859a861fc71e979fb05f30cd34067e711b63a3b9ff85"} err="failed to get container status \"2bc68cc6f289e4695987859a861fc71e979fb05f30cd34067e711b63a3b9ff85\": rpc error: code = NotFound desc = could not find container \"2bc68cc6f289e4695987859a861fc71e979fb05f30cd34067e711b63a3b9ff85\": container with ID starting with 2bc68cc6f289e4695987859a861fc71e979fb05f30cd34067e711b63a3b9ff85 not found: ID does not exist" Jan 05 21:52:21 crc kubenswrapper[5000]: I0105 21:52:21.936695 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qtjzr"] Jan 05 21:52:21 crc kubenswrapper[5000]: W0105 21:52:21.945511 5000 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfadbba38_e7c5_464a_99d9_7895875ab04b.slice/crio-03f2e055212211fcc8b17af99306c99c21202ddf39d24cb336d4f20e8a819dab WatchSource:0}: Error finding container 03f2e055212211fcc8b17af99306c99c21202ddf39d24cb336d4f20e8a819dab: Status 404 returned error can't find the container with id 03f2e055212211fcc8b17af99306c99c21202ddf39d24cb336d4f20e8a819dab Jan 05 21:52:22 crc kubenswrapper[5000]: I0105 21:52:22.601286 5000 generic.go:334] "Generic (PLEG): container finished" podID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerID="ae354d12f934e65d833d3bcf4a04e03baea045cefa252e9e49b4f98a89150f03" exitCode=0 Jan 05 21:52:22 crc kubenswrapper[5000]: I0105 21:52:22.601559 5000 generic.go:334] "Generic (PLEG): container finished" podID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerID="8dd07f684fa16bb86d8c0855474803c0ac10476b5f2797e5b5c0e01816ddd1e7" exitCode=2 Jan 05 21:52:22 crc kubenswrapper[5000]: I0105 21:52:22.601566 5000 generic.go:334] "Generic (PLEG): container finished" podID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerID="39a687792d64b2367380bb265054207b7b22d14b70c03a512d61a4001686544f" exitCode=0 Jan 05 21:52:22 crc kubenswrapper[5000]: I0105 21:52:22.601597 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b70ecf5c-ed13-4825-a0ab-ae258235b3bf","Type":"ContainerDied","Data":"ae354d12f934e65d833d3bcf4a04e03baea045cefa252e9e49b4f98a89150f03"} Jan 05 21:52:22 crc kubenswrapper[5000]: I0105 21:52:22.601619 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b70ecf5c-ed13-4825-a0ab-ae258235b3bf","Type":"ContainerDied","Data":"8dd07f684fa16bb86d8c0855474803c0ac10476b5f2797e5b5c0e01816ddd1e7"} Jan 05 21:52:22 crc kubenswrapper[5000]: I0105 21:52:22.601627 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b70ecf5c-ed13-4825-a0ab-ae258235b3bf","Type":"ContainerDied","Data":"39a687792d64b2367380bb265054207b7b22d14b70c03a512d61a4001686544f"} Jan 05 21:52:22 crc kubenswrapper[5000]: I0105 21:52:22.603076 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"62ae3bff-5f88-4662-86d4-0a4e1c51c8be","Type":"ContainerStarted","Data":"db7d79b18fd7b0faecf4053d2ad9ffd45a49262efe8473a07c7c69e3b24e33d6"} Jan 05 21:52:22 crc kubenswrapper[5000]: I0105 21:52:22.604805 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qtjzr" event={"ID":"fadbba38-e7c5-464a-99d9-7895875ab04b","Type":"ContainerStarted","Data":"03f2e055212211fcc8b17af99306c99c21202ddf39d24cb336d4f20e8a819dab"} Jan 05 21:52:22 crc kubenswrapper[5000]: I0105 21:52:22.626113 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.626098697 podStartE2EDuration="4.626098697s" podCreationTimestamp="2026-01-05 21:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:52:22.62200207 +0000 UTC m=+1097.578204559" watchObservedRunningTime="2026-01-05 21:52:22.626098697 +0000 UTC m=+1097.582301166" Jan 05 21:52:23 crc kubenswrapper[5000]: I0105 21:52:23.333461 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" path="/var/lib/kubelet/pods/e000bdc7-d544-4dfe-ab2e-6c43a7453748/volumes" Jan 05 21:52:23 crc kubenswrapper[5000]: I0105 21:52:23.729319 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:52:23 crc kubenswrapper[5000]: I0105 21:52:23.729612 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="a784c52f-445a-4e50-8e93-3197d01b0f01" containerName="glance-log" containerID="cri-o://0862ebb35357628e4f45ec8191b9d13ac2aa66d190788e495228e650f091c797" gracePeriod=30 Jan 05 21:52:23 crc kubenswrapper[5000]: I0105 21:52:23.729774 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a784c52f-445a-4e50-8e93-3197d01b0f01" containerName="glance-httpd" containerID="cri-o://6c3d93618a51e9b4bda0c46cb1a773e10adf685afcae798cc74db5624e5ca8cc" gracePeriod=30 Jan 05 21:52:24 crc kubenswrapper[5000]: I0105 21:52:24.625200 5000 generic.go:334] "Generic (PLEG): container finished" podID="a784c52f-445a-4e50-8e93-3197d01b0f01" containerID="0862ebb35357628e4f45ec8191b9d13ac2aa66d190788e495228e650f091c797" exitCode=143 Jan 05 21:52:24 crc kubenswrapper[5000]: I0105 21:52:24.625297 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a784c52f-445a-4e50-8e93-3197d01b0f01","Type":"ContainerDied","Data":"0862ebb35357628e4f45ec8191b9d13ac2aa66d190788e495228e650f091c797"} Jan 05 21:52:26 crc kubenswrapper[5000]: I0105 21:52:26.651086 5000 generic.go:334] "Generic (PLEG): container finished" podID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerID="fc3a3fcffd375a70a46dce39709b2e0832bf79d400e2e35c8624645af231a8e3" exitCode=0 Jan 05 21:52:26 crc kubenswrapper[5000]: I0105 21:52:26.651120 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b70ecf5c-ed13-4825-a0ab-ae258235b3bf","Type":"ContainerDied","Data":"fc3a3fcffd375a70a46dce39709b2e0832bf79d400e2e35c8624645af231a8e3"} Jan 05 21:52:27 crc kubenswrapper[5000]: I0105 21:52:27.671552 5000 generic.go:334] "Generic (PLEG): container finished" podID="a784c52f-445a-4e50-8e93-3197d01b0f01" containerID="6c3d93618a51e9b4bda0c46cb1a773e10adf685afcae798cc74db5624e5ca8cc" exitCode=0 Jan 05 21:52:27 crc kubenswrapper[5000]: I0105 21:52:27.671596 5000 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a784c52f-445a-4e50-8e93-3197d01b0f01","Type":"ContainerDied","Data":"6c3d93618a51e9b4bda0c46cb1a773e10adf685afcae798cc74db5624e5ca8cc"} Jan 05 21:52:29 crc kubenswrapper[5000]: I0105 21:52:29.435586 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 21:52:29 crc kubenswrapper[5000]: I0105 21:52:29.435991 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 05 21:52:29 crc kubenswrapper[5000]: I0105 21:52:29.463335 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 21:52:29 crc kubenswrapper[5000]: I0105 21:52:29.474004 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 05 21:52:29 crc kubenswrapper[5000]: I0105 21:52:29.696916 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 21:52:29 crc kubenswrapper[5000]: I0105 21:52:29.697367 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.378767 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.451988 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.547588 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-scripts\") pod \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.547642 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46258\" (UniqueName: \"kubernetes.io/projected/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-kube-api-access-46258\") pod \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.547677 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-run-httpd\") pod \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.547699 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-combined-ca-bundle\") pod \"a784c52f-445a-4e50-8e93-3197d01b0f01\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.547722 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-sg-core-conf-yaml\") pod \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.547742 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-config-data\") pod \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.547758 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-scripts\") pod \"a784c52f-445a-4e50-8e93-3197d01b0f01\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.547779 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6nsl\" (UniqueName: \"kubernetes.io/projected/a784c52f-445a-4e50-8e93-3197d01b0f01-kube-api-access-f6nsl\") pod \"a784c52f-445a-4e50-8e93-3197d01b0f01\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.547818 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"a784c52f-445a-4e50-8e93-3197d01b0f01\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.547919 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-combined-ca-bundle\") pod \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.547935 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-config-data\") pod \"a784c52f-445a-4e50-8e93-3197d01b0f01\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.547972 5000 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-public-tls-certs\") pod \"a784c52f-445a-4e50-8e93-3197d01b0f01\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.547992 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-log-httpd\") pod \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\" (UID: \"b70ecf5c-ed13-4825-a0ab-ae258235b3bf\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.548041 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a784c52f-445a-4e50-8e93-3197d01b0f01-logs\") pod \"a784c52f-445a-4e50-8e93-3197d01b0f01\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.548090 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a784c52f-445a-4e50-8e93-3197d01b0f01-httpd-run\") pod \"a784c52f-445a-4e50-8e93-3197d01b0f01\" (UID: \"a784c52f-445a-4e50-8e93-3197d01b0f01\") " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.548283 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b70ecf5c-ed13-4825-a0ab-ae258235b3bf" (UID: "b70ecf5c-ed13-4825-a0ab-ae258235b3bf"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.548438 5000 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.550241 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a784c52f-445a-4e50-8e93-3197d01b0f01-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a784c52f-445a-4e50-8e93-3197d01b0f01" (UID: "a784c52f-445a-4e50-8e93-3197d01b0f01"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.550368 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a784c52f-445a-4e50-8e93-3197d01b0f01-logs" (OuterVolumeSpecName: "logs") pod "a784c52f-445a-4e50-8e93-3197d01b0f01" (UID: "a784c52f-445a-4e50-8e93-3197d01b0f01"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.551585 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b70ecf5c-ed13-4825-a0ab-ae258235b3bf" (UID: "b70ecf5c-ed13-4825-a0ab-ae258235b3bf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.558698 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "a784c52f-445a-4e50-8e93-3197d01b0f01" (UID: "a784c52f-445a-4e50-8e93-3197d01b0f01"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.563101 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-kube-api-access-46258" (OuterVolumeSpecName: "kube-api-access-46258") pod "b70ecf5c-ed13-4825-a0ab-ae258235b3bf" (UID: "b70ecf5c-ed13-4825-a0ab-ae258235b3bf"). InnerVolumeSpecName "kube-api-access-46258". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.571362 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a784c52f-445a-4e50-8e93-3197d01b0f01-kube-api-access-f6nsl" (OuterVolumeSpecName: "kube-api-access-f6nsl") pod "a784c52f-445a-4e50-8e93-3197d01b0f01" (UID: "a784c52f-445a-4e50-8e93-3197d01b0f01"). InnerVolumeSpecName "kube-api-access-f6nsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.575027 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-scripts" (OuterVolumeSpecName: "scripts") pod "b70ecf5c-ed13-4825-a0ab-ae258235b3bf" (UID: "b70ecf5c-ed13-4825-a0ab-ae258235b3bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.585521 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-scripts" (OuterVolumeSpecName: "scripts") pod "a784c52f-445a-4e50-8e93-3197d01b0f01" (UID: "a784c52f-445a-4e50-8e93-3197d01b0f01"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.649900 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.649926 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46258\" (UniqueName: \"kubernetes.io/projected/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-kube-api-access-46258\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.649938 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.649949 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6nsl\" (UniqueName: \"kubernetes.io/projected/a784c52f-445a-4e50-8e93-3197d01b0f01-kube-api-access-f6nsl\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.649970 5000 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.649981 5000 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.649989 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a784c52f-445a-4e50-8e93-3197d01b0f01-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.649999 5000 reconciler_common.go:293] "Volume detached for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a784c52f-445a-4e50-8e93-3197d01b0f01-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.653689 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a784c52f-445a-4e50-8e93-3197d01b0f01" (UID: "a784c52f-445a-4e50-8e93-3197d01b0f01"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.653799 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a784c52f-445a-4e50-8e93-3197d01b0f01" (UID: "a784c52f-445a-4e50-8e93-3197d01b0f01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.662010 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b70ecf5c-ed13-4825-a0ab-ae258235b3bf" (UID: "b70ecf5c-ed13-4825-a0ab-ae258235b3bf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.666119 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-config-data" (OuterVolumeSpecName: "config-data") pod "a784c52f-445a-4e50-8e93-3197d01b0f01" (UID: "a784c52f-445a-4e50-8e93-3197d01b0f01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.695288 5000 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.716562 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qtjzr" event={"ID":"fadbba38-e7c5-464a-99d9-7895875ab04b","Type":"ContainerStarted","Data":"4cb9561782447b7d5e3a3f65af9f0601af25b11c187cb16b76f6b811ae82cd8e"} Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.727400 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b70ecf5c-ed13-4825-a0ab-ae258235b3bf","Type":"ContainerDied","Data":"384be6085c0e8a1846a1d475f0ebed4b78cfa25fa11d2238c98f6b51968e1ac3"} Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.727486 5000 scope.go:117] "RemoveContainer" containerID="ae354d12f934e65d833d3bcf4a04e03baea045cefa252e9e49b4f98a89150f03" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.727665 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.738972 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a784c52f-445a-4e50-8e93-3197d01b0f01","Type":"ContainerDied","Data":"389feab859c1c14907d5dae3dd4a5aeda269ac039807e1e75027dff69e1a1b06"} Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.741991 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.742728 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qtjzr" podStartSLOduration=2.6338437470000002 podStartE2EDuration="10.742705516s" podCreationTimestamp="2026-01-05 21:52:20 +0000 UTC" firstStartedPulling="2026-01-05 21:52:21.947988643 +0000 UTC m=+1096.904191112" lastFinishedPulling="2026-01-05 21:52:30.056850412 +0000 UTC m=+1105.013052881" observedRunningTime="2026-01-05 21:52:30.738172397 +0000 UTC m=+1105.694374886" watchObservedRunningTime="2026-01-05 21:52:30.742705516 +0000 UTC m=+1105.698907985" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.753778 5000 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.753813 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.753823 5000 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.753833 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a784c52f-445a-4e50-8e93-3197d01b0f01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.753841 5000 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-sg-core-conf-yaml\") on 
node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.760412 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-config-data" (OuterVolumeSpecName: "config-data") pod "b70ecf5c-ed13-4825-a0ab-ae258235b3bf" (UID: "b70ecf5c-ed13-4825-a0ab-ae258235b3bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.768869 5000 scope.go:117] "RemoveContainer" containerID="8dd07f684fa16bb86d8c0855474803c0ac10476b5f2797e5b5c0e01816ddd1e7" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.796646 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.800021 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.803095 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b70ecf5c-ed13-4825-a0ab-ae258235b3bf" (UID: "b70ecf5c-ed13-4825-a0ab-ae258235b3bf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812004 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:52:30 crc kubenswrapper[5000]: E0105 21:52:30.812363 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="ceilometer-notification-agent" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812376 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="ceilometer-notification-agent" Jan 05 21:52:30 crc kubenswrapper[5000]: E0105 21:52:30.812394 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon-log" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812401 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon-log" Jan 05 21:52:30 crc kubenswrapper[5000]: E0105 21:52:30.812412 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="sg-core" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812418 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="sg-core" Jan 05 21:52:30 crc kubenswrapper[5000]: E0105 21:52:30.812433 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a784c52f-445a-4e50-8e93-3197d01b0f01" containerName="glance-httpd" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812453 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a784c52f-445a-4e50-8e93-3197d01b0f01" containerName="glance-httpd" Jan 05 21:52:30 crc kubenswrapper[5000]: E0105 21:52:30.812462 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" 
containerName="proxy-httpd" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812467 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="proxy-httpd" Jan 05 21:52:30 crc kubenswrapper[5000]: E0105 21:52:30.812480 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a784c52f-445a-4e50-8e93-3197d01b0f01" containerName="glance-log" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812486 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a784c52f-445a-4e50-8e93-3197d01b0f01" containerName="glance-log" Jan 05 21:52:30 crc kubenswrapper[5000]: E0105 21:52:30.812499 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="ceilometer-central-agent" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812505 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="ceilometer-central-agent" Jan 05 21:52:30 crc kubenswrapper[5000]: E0105 21:52:30.812523 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812530 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812682 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon-log" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812697 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="ceilometer-notification-agent" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812708 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" 
containerName="ceilometer-central-agent" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812717 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="sg-core" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812727 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" containerName="proxy-httpd" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812740 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="e000bdc7-d544-4dfe-ab2e-6c43a7453748" containerName="horizon" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812749 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a784c52f-445a-4e50-8e93-3197d01b0f01" containerName="glance-httpd" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.812759 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a784c52f-445a-4e50-8e93-3197d01b0f01" containerName="glance-log" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.814088 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.816113 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.817972 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.819008 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.841306 5000 scope.go:117] "RemoveContainer" containerID="39a687792d64b2367380bb265054207b7b22d14b70c03a512d61a4001686544f" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.856304 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.856345 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70ecf5c-ed13-4825-a0ab-ae258235b3bf-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.879984 5000 scope.go:117] "RemoveContainer" containerID="fc3a3fcffd375a70a46dce39709b2e0832bf79d400e2e35c8624645af231a8e3" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.906342 5000 scope.go:117] "RemoveContainer" containerID="6c3d93618a51e9b4bda0c46cb1a773e10adf685afcae798cc74db5624e5ca8cc" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.929071 5000 scope.go:117] "RemoveContainer" containerID="0862ebb35357628e4f45ec8191b9d13ac2aa66d190788e495228e650f091c797" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.957264 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8587a6fa-051f-4c91-bb39-6c9bb628adbb-config-data\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.958116 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8587a6fa-051f-4c91-bb39-6c9bb628adbb-scripts\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.958263 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8587a6fa-051f-4c91-bb39-6c9bb628adbb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.958439 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8587a6fa-051f-4c91-bb39-6c9bb628adbb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.958529 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.958943 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/8587a6fa-051f-4c91-bb39-6c9bb628adbb-logs\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.959139 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8587a6fa-051f-4c91-bb39-6c9bb628adbb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:30 crc kubenswrapper[5000]: I0105 21:52:30.959414 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdl44\" (UniqueName: \"kubernetes.io/projected/8587a6fa-051f-4c91-bb39-6c9bb628adbb-kube-api-access-qdl44\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.062459 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8587a6fa-051f-4c91-bb39-6c9bb628adbb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.062571 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdl44\" (UniqueName: \"kubernetes.io/projected/8587a6fa-051f-4c91-bb39-6c9bb628adbb-kube-api-access-qdl44\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.062654 5000 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8587a6fa-051f-4c91-bb39-6c9bb628adbb-config-data\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.062697 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8587a6fa-051f-4c91-bb39-6c9bb628adbb-scripts\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.062723 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8587a6fa-051f-4c91-bb39-6c9bb628adbb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.062755 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8587a6fa-051f-4c91-bb39-6c9bb628adbb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.062786 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.062811 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8587a6fa-051f-4c91-bb39-6c9bb628adbb-logs\") 
pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.063781 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8587a6fa-051f-4c91-bb39-6c9bb628adbb-logs\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.065912 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.074882 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.080206 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8587a6fa-051f-4c91-bb39-6c9bb628adbb-config-data\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.080655 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8587a6fa-051f-4c91-bb39-6c9bb628adbb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.087324 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:31 crc kubenswrapper[5000]: 
I0105 21:52:31.088815 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8587a6fa-051f-4c91-bb39-6c9bb628adbb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.088855 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8587a6fa-051f-4c91-bb39-6c9bb628adbb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.101127 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdl44\" (UniqueName: \"kubernetes.io/projected/8587a6fa-051f-4c91-bb39-6c9bb628adbb-kube-api-access-qdl44\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.102742 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.103979 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8587a6fa-051f-4c91-bb39-6c9bb628adbb-scripts\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.104777 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.112775 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.118206 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.119951 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.161940 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8587a6fa-051f-4c91-bb39-6c9bb628adbb\") " pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.163883 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.163997 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfctk\" (UniqueName: \"kubernetes.io/projected/feba2f99-8642-4f0b-920d-0620e8ef4b81-kube-api-access-cfctk\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.164029 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feba2f99-8642-4f0b-920d-0620e8ef4b81-run-httpd\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " 
pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.164071 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feba2f99-8642-4f0b-920d-0620e8ef4b81-log-httpd\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.164159 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-scripts\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.164198 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.164245 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-config-data\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.265448 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-scripts\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.265536 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.265577 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-config-data\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.265611 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.265664 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfctk\" (UniqueName: \"kubernetes.io/projected/feba2f99-8642-4f0b-920d-0620e8ef4b81-kube-api-access-cfctk\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.265685 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feba2f99-8642-4f0b-920d-0620e8ef4b81-run-httpd\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.265710 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feba2f99-8642-4f0b-920d-0620e8ef4b81-log-httpd\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 
21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.266163 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feba2f99-8642-4f0b-920d-0620e8ef4b81-log-httpd\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.267286 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feba2f99-8642-4f0b-920d-0620e8ef4b81-run-httpd\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.270372 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.270649 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.271329 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-scripts\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.271483 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-config-data\") pod \"ceilometer-0\" (UID: 
\"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.287701 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfctk\" (UniqueName: \"kubernetes.io/projected/feba2f99-8642-4f0b-920d-0620e8ef4b81-kube-api-access-cfctk\") pod \"ceilometer-0\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.335745 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a784c52f-445a-4e50-8e93-3197d01b0f01" path="/var/lib/kubelet/pods/a784c52f-445a-4e50-8e93-3197d01b0f01/volumes" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.336614 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b70ecf5c-ed13-4825-a0ab-ae258235b3bf" path="/var/lib/kubelet/pods/b70ecf5c-ed13-4825-a0ab-ae258235b3bf/volumes" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.443553 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.548944 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.816999 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.817393 5000 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 05 21:52:31 crc kubenswrapper[5000]: I0105 21:52:31.869931 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 05 21:52:31 crc kubenswrapper[5000]: W0105 21:52:31.873087 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8587a6fa_051f_4c91_bb39_6c9bb628adbb.slice/crio-a633280d106656afe5e50a91dab124da6abbac48e3655ebc4ea889264efacab6 WatchSource:0}: Error finding container a633280d106656afe5e50a91dab124da6abbac48e3655ebc4ea889264efacab6: Status 404 returned error can't find the container with id a633280d106656afe5e50a91dab124da6abbac48e3655ebc4ea889264efacab6 Jan 05 21:52:32 crc kubenswrapper[5000]: I0105 21:52:32.123333 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:32 crc kubenswrapper[5000]: I0105 21:52:32.300949 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 05 21:52:32 crc kubenswrapper[5000]: I0105 21:52:32.782214 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8587a6fa-051f-4c91-bb39-6c9bb628adbb","Type":"ContainerStarted","Data":"40f13aa459f1e64a43fe96562a13e53a670c33f6d0764f234d2c2aedd8c60e33"} Jan 05 21:52:32 crc kubenswrapper[5000]: I0105 21:52:32.782295 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"8587a6fa-051f-4c91-bb39-6c9bb628adbb","Type":"ContainerStarted","Data":"a633280d106656afe5e50a91dab124da6abbac48e3655ebc4ea889264efacab6"} Jan 05 21:52:32 crc kubenswrapper[5000]: I0105 21:52:32.784074 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feba2f99-8642-4f0b-920d-0620e8ef4b81","Type":"ContainerStarted","Data":"730351c8825f684b6b0206146a859a9a0924f9627dabcc28a2fe4d5b9f6e0c75"} Jan 05 21:52:33 crc kubenswrapper[5000]: I0105 21:52:33.793867 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feba2f99-8642-4f0b-920d-0620e8ef4b81","Type":"ContainerStarted","Data":"8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5"} Jan 05 21:52:33 crc kubenswrapper[5000]: I0105 21:52:33.795672 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8587a6fa-051f-4c91-bb39-6c9bb628adbb","Type":"ContainerStarted","Data":"8160f922633fb9acdde719d79dfb94a899b14085f40211ff27df9644526b20e5"} Jan 05 21:52:33 crc kubenswrapper[5000]: I0105 21:52:33.864482 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.8644570270000003 podStartE2EDuration="3.864457027s" podCreationTimestamp="2026-01-05 21:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:52:33.856099129 +0000 UTC m=+1108.812301618" watchObservedRunningTime="2026-01-05 21:52:33.864457027 +0000 UTC m=+1108.820659496" Jan 05 21:52:34 crc kubenswrapper[5000]: I0105 21:52:34.805926 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feba2f99-8642-4f0b-920d-0620e8ef4b81","Type":"ContainerStarted","Data":"c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9"} Jan 05 21:52:34 crc kubenswrapper[5000]: I0105 
21:52:34.806522 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feba2f99-8642-4f0b-920d-0620e8ef4b81","Type":"ContainerStarted","Data":"a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3"} Jan 05 21:52:35 crc kubenswrapper[5000]: I0105 21:52:35.828317 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feba2f99-8642-4f0b-920d-0620e8ef4b81","Type":"ContainerStarted","Data":"e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0"} Jan 05 21:52:35 crc kubenswrapper[5000]: I0105 21:52:35.829643 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 21:52:35 crc kubenswrapper[5000]: I0105 21:52:35.848047 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.609843053 podStartE2EDuration="4.848025372s" podCreationTimestamp="2026-01-05 21:52:31 +0000 UTC" firstStartedPulling="2026-01-05 21:52:32.198350897 +0000 UTC m=+1107.154553366" lastFinishedPulling="2026-01-05 21:52:35.436533216 +0000 UTC m=+1110.392735685" observedRunningTime="2026-01-05 21:52:35.846448907 +0000 UTC m=+1110.802651396" watchObservedRunningTime="2026-01-05 21:52:35.848025372 +0000 UTC m=+1110.804227841" Jan 05 21:52:41 crc kubenswrapper[5000]: I0105 21:52:41.443943 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 21:52:41 crc kubenswrapper[5000]: I0105 21:52:41.444568 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 05 21:52:41 crc kubenswrapper[5000]: I0105 21:52:41.484130 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 05 21:52:41 crc kubenswrapper[5000]: I0105 21:52:41.488109 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/glance-default-external-api-0" Jan 05 21:52:41 crc kubenswrapper[5000]: I0105 21:52:41.882187 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 21:52:41 crc kubenswrapper[5000]: I0105 21:52:41.882555 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 05 21:52:43 crc kubenswrapper[5000]: I0105 21:52:43.802473 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 21:52:43 crc kubenswrapper[5000]: I0105 21:52:43.803449 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 05 21:52:44 crc kubenswrapper[5000]: I0105 21:52:44.753617 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:44 crc kubenswrapper[5000]: I0105 21:52:44.756088 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="ceilometer-central-agent" containerID="cri-o://8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5" gracePeriod=30 Jan 05 21:52:44 crc kubenswrapper[5000]: I0105 21:52:44.756208 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="proxy-httpd" containerID="cri-o://e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0" gracePeriod=30 Jan 05 21:52:44 crc kubenswrapper[5000]: I0105 21:52:44.756256 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="sg-core" containerID="cri-o://c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9" gracePeriod=30 Jan 05 21:52:44 crc 
kubenswrapper[5000]: I0105 21:52:44.756299 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="ceilometer-notification-agent" containerID="cri-o://a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3" gracePeriod=30 Jan 05 21:52:44 crc kubenswrapper[5000]: I0105 21:52:44.908594 5000 generic.go:334] "Generic (PLEG): container finished" podID="fadbba38-e7c5-464a-99d9-7895875ab04b" containerID="4cb9561782447b7d5e3a3f65af9f0601af25b11c187cb16b76f6b811ae82cd8e" exitCode=0 Jan 05 21:52:44 crc kubenswrapper[5000]: I0105 21:52:44.908699 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qtjzr" event={"ID":"fadbba38-e7c5-464a-99d9-7895875ab04b","Type":"ContainerDied","Data":"4cb9561782447b7d5e3a3f65af9f0601af25b11c187cb16b76f6b811ae82cd8e"} Jan 05 21:52:44 crc kubenswrapper[5000]: I0105 21:52:44.915107 5000 generic.go:334] "Generic (PLEG): container finished" podID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerID="e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0" exitCode=0 Jan 05 21:52:44 crc kubenswrapper[5000]: I0105 21:52:44.915179 5000 generic.go:334] "Generic (PLEG): container finished" podID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerID="c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9" exitCode=2 Jan 05 21:52:44 crc kubenswrapper[5000]: I0105 21:52:44.915162 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feba2f99-8642-4f0b-920d-0620e8ef4b81","Type":"ContainerDied","Data":"e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0"} Jan 05 21:52:44 crc kubenswrapper[5000]: I0105 21:52:44.915252 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"feba2f99-8642-4f0b-920d-0620e8ef4b81","Type":"ContainerDied","Data":"c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9"} Jan 05 21:52:45 crc kubenswrapper[5000]: I0105 21:52:45.915398 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:52:45 crc kubenswrapper[5000]: I0105 21:52:45.933558 5000 generic.go:334] "Generic (PLEG): container finished" podID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerID="a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3" exitCode=0 Jan 05 21:52:45 crc kubenswrapper[5000]: I0105 21:52:45.933589 5000 generic.go:334] "Generic (PLEG): container finished" podID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerID="8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5" exitCode=0 Jan 05 21:52:45 crc kubenswrapper[5000]: I0105 21:52:45.933632 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:52:45 crc kubenswrapper[5000]: I0105 21:52:45.933791 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feba2f99-8642-4f0b-920d-0620e8ef4b81","Type":"ContainerDied","Data":"a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3"} Jan 05 21:52:45 crc kubenswrapper[5000]: I0105 21:52:45.933818 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feba2f99-8642-4f0b-920d-0620e8ef4b81","Type":"ContainerDied","Data":"8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5"} Jan 05 21:52:45 crc kubenswrapper[5000]: I0105 21:52:45.933828 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feba2f99-8642-4f0b-920d-0620e8ef4b81","Type":"ContainerDied","Data":"730351c8825f684b6b0206146a859a9a0924f9627dabcc28a2fe4d5b9f6e0c75"} Jan 05 21:52:45 crc kubenswrapper[5000]: I0105 21:52:45.933842 5000 scope.go:117] "RemoveContainer" 
containerID="e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0" Jan 05 21:52:45 crc kubenswrapper[5000]: I0105 21:52:45.977332 5000 scope.go:117] "RemoveContainer" containerID="c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9" Jan 05 21:52:45 crc kubenswrapper[5000]: I0105 21:52:45.999059 5000 scope.go:117] "RemoveContainer" containerID="a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.024705 5000 scope.go:117] "RemoveContainer" containerID="8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.045293 5000 scope.go:117] "RemoveContainer" containerID="e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0" Jan 05 21:52:46 crc kubenswrapper[5000]: E0105 21:52:46.045795 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0\": container with ID starting with e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0 not found: ID does not exist" containerID="e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.045831 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0"} err="failed to get container status \"e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0\": rpc error: code = NotFound desc = could not find container \"e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0\": container with ID starting with e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0 not found: ID does not exist" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.045856 5000 scope.go:117] "RemoveContainer" 
containerID="c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9" Jan 05 21:52:46 crc kubenswrapper[5000]: E0105 21:52:46.046262 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9\": container with ID starting with c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9 not found: ID does not exist" containerID="c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.046280 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9"} err="failed to get container status \"c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9\": rpc error: code = NotFound desc = could not find container \"c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9\": container with ID starting with c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9 not found: ID does not exist" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.046294 5000 scope.go:117] "RemoveContainer" containerID="a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3" Jan 05 21:52:46 crc kubenswrapper[5000]: E0105 21:52:46.046992 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3\": container with ID starting with a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3 not found: ID does not exist" containerID="a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.047033 5000 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3"} err="failed to get container status \"a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3\": rpc error: code = NotFound desc = could not find container \"a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3\": container with ID starting with a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3 not found: ID does not exist" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.047059 5000 scope.go:117] "RemoveContainer" containerID="8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5" Jan 05 21:52:46 crc kubenswrapper[5000]: E0105 21:52:46.047345 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5\": container with ID starting with 8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5 not found: ID does not exist" containerID="8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.047365 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5"} err="failed to get container status \"8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5\": rpc error: code = NotFound desc = could not find container \"8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5\": container with ID starting with 8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5 not found: ID does not exist" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.047378 5000 scope.go:117] "RemoveContainer" containerID="e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.047584 5000 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0"} err="failed to get container status \"e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0\": rpc error: code = NotFound desc = could not find container \"e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0\": container with ID starting with e4dd11618dfa2827de27ece40cc5cd8fc71271a55d278e659a8dba319c097cc0 not found: ID does not exist" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.047602 5000 scope.go:117] "RemoveContainer" containerID="c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.047763 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9"} err="failed to get container status \"c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9\": rpc error: code = NotFound desc = could not find container \"c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9\": container with ID starting with c1daa3762d044e3959d87c042541761c9db80e9f45f670ad41e61b5061a4d7e9 not found: ID does not exist" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.047784 5000 scope.go:117] "RemoveContainer" containerID="a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.048017 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3"} err="failed to get container status \"a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3\": rpc error: code = NotFound desc = could not find container \"a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3\": container with ID starting with a34a222866da9e855dc9366903e3c792d5bcf14d1b35eb9491701c6347328eb3 not 
found: ID does not exist" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.048035 5000 scope.go:117] "RemoveContainer" containerID="8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.048652 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5"} err="failed to get container status \"8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5\": rpc error: code = NotFound desc = could not find container \"8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5\": container with ID starting with 8fce6d7d8cbfd23e61ca88a5df2cf68a7c35d5e8798f58747090eadfff1f5ba5 not found: ID does not exist" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.085639 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feba2f99-8642-4f0b-920d-0620e8ef4b81-run-httpd\") pod \"feba2f99-8642-4f0b-920d-0620e8ef4b81\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.085760 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feba2f99-8642-4f0b-920d-0620e8ef4b81-log-httpd\") pod \"feba2f99-8642-4f0b-920d-0620e8ef4b81\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.085805 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfctk\" (UniqueName: \"kubernetes.io/projected/feba2f99-8642-4f0b-920d-0620e8ef4b81-kube-api-access-cfctk\") pod \"feba2f99-8642-4f0b-920d-0620e8ef4b81\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.085828 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-scripts\") pod \"feba2f99-8642-4f0b-920d-0620e8ef4b81\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.085854 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-combined-ca-bundle\") pod \"feba2f99-8642-4f0b-920d-0620e8ef4b81\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.086153 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feba2f99-8642-4f0b-920d-0620e8ef4b81-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "feba2f99-8642-4f0b-920d-0620e8ef4b81" (UID: "feba2f99-8642-4f0b-920d-0620e8ef4b81"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.086343 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feba2f99-8642-4f0b-920d-0620e8ef4b81-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "feba2f99-8642-4f0b-920d-0620e8ef4b81" (UID: "feba2f99-8642-4f0b-920d-0620e8ef4b81"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.086740 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-config-data\") pod \"feba2f99-8642-4f0b-920d-0620e8ef4b81\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.086803 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-sg-core-conf-yaml\") pod \"feba2f99-8642-4f0b-920d-0620e8ef4b81\" (UID: \"feba2f99-8642-4f0b-920d-0620e8ef4b81\") " Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.087300 5000 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feba2f99-8642-4f0b-920d-0620e8ef4b81-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.087312 5000 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feba2f99-8642-4f0b-920d-0620e8ef4b81-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.091676 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-scripts" (OuterVolumeSpecName: "scripts") pod "feba2f99-8642-4f0b-920d-0620e8ef4b81" (UID: "feba2f99-8642-4f0b-920d-0620e8ef4b81"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.094208 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feba2f99-8642-4f0b-920d-0620e8ef4b81-kube-api-access-cfctk" (OuterVolumeSpecName: "kube-api-access-cfctk") pod "feba2f99-8642-4f0b-920d-0620e8ef4b81" (UID: "feba2f99-8642-4f0b-920d-0620e8ef4b81"). InnerVolumeSpecName "kube-api-access-cfctk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.112662 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "feba2f99-8642-4f0b-920d-0620e8ef4b81" (UID: "feba2f99-8642-4f0b-920d-0620e8ef4b81"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.183775 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "feba2f99-8642-4f0b-920d-0620e8ef4b81" (UID: "feba2f99-8642-4f0b-920d-0620e8ef4b81"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.188960 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfctk\" (UniqueName: \"kubernetes.io/projected/feba2f99-8642-4f0b-920d-0620e8ef4b81-kube-api-access-cfctk\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.188991 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.189003 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.189016 5000 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.210168 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-config-data" (OuterVolumeSpecName: "config-data") pod "feba2f99-8642-4f0b-920d-0620e8ef4b81" (UID: "feba2f99-8642-4f0b-920d-0620e8ef4b81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.271477 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.289156 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.290308 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feba2f99-8642-4f0b-920d-0620e8ef4b81-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.298956 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.344800 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:46 crc kubenswrapper[5000]: E0105 21:52:46.345293 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="proxy-httpd" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.345309 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="proxy-httpd" Jan 05 21:52:46 crc kubenswrapper[5000]: E0105 21:52:46.345328 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fadbba38-e7c5-464a-99d9-7895875ab04b" containerName="nova-cell0-conductor-db-sync" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.345337 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="fadbba38-e7c5-464a-99d9-7895875ab04b" containerName="nova-cell0-conductor-db-sync" Jan 05 21:52:46 crc kubenswrapper[5000]: E0105 21:52:46.345357 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="ceilometer-central-agent" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.345365 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" 
containerName="ceilometer-central-agent" Jan 05 21:52:46 crc kubenswrapper[5000]: E0105 21:52:46.345385 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="ceilometer-notification-agent" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.345392 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="ceilometer-notification-agent" Jan 05 21:52:46 crc kubenswrapper[5000]: E0105 21:52:46.345403 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="sg-core" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.345410 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="sg-core" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.345638 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="proxy-httpd" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.345658 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="ceilometer-central-agent" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.345667 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="sg-core" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.345682 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" containerName="ceilometer-notification-agent" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.345702 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="fadbba38-e7c5-464a-99d9-7895875ab04b" containerName="nova-cell0-conductor-db-sync" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.348709 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.354446 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.354627 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.357100 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.391763 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dzkq\" (UniqueName: \"kubernetes.io/projected/fadbba38-e7c5-464a-99d9-7895875ab04b-kube-api-access-4dzkq\") pod \"fadbba38-e7c5-464a-99d9-7895875ab04b\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.391816 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-scripts\") pod \"fadbba38-e7c5-464a-99d9-7895875ab04b\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.392028 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-combined-ca-bundle\") pod \"fadbba38-e7c5-464a-99d9-7895875ab04b\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.392082 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-config-data\") pod \"fadbba38-e7c5-464a-99d9-7895875ab04b\" (UID: \"fadbba38-e7c5-464a-99d9-7895875ab04b\") " Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 
21:52:46.397096 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-scripts" (OuterVolumeSpecName: "scripts") pod "fadbba38-e7c5-464a-99d9-7895875ab04b" (UID: "fadbba38-e7c5-464a-99d9-7895875ab04b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.397647 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fadbba38-e7c5-464a-99d9-7895875ab04b-kube-api-access-4dzkq" (OuterVolumeSpecName: "kube-api-access-4dzkq") pod "fadbba38-e7c5-464a-99d9-7895875ab04b" (UID: "fadbba38-e7c5-464a-99d9-7895875ab04b"). InnerVolumeSpecName "kube-api-access-4dzkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.418095 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-config-data" (OuterVolumeSpecName: "config-data") pod "fadbba38-e7c5-464a-99d9-7895875ab04b" (UID: "fadbba38-e7c5-464a-99d9-7895875ab04b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.420160 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fadbba38-e7c5-464a-99d9-7895875ab04b" (UID: "fadbba38-e7c5-464a-99d9-7895875ab04b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.494184 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-config-data\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.494617 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d308eadf-cd5d-4a84-863b-dc64302ebfda-run-httpd\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.494635 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.494837 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-scripts\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.494870 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d308eadf-cd5d-4a84-863b-dc64302ebfda-log-httpd\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.495127 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.495234 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-755qx\" (UniqueName: \"kubernetes.io/projected/d308eadf-cd5d-4a84-863b-dc64302ebfda-kube-api-access-755qx\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.495445 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dzkq\" (UniqueName: \"kubernetes.io/projected/fadbba38-e7c5-464a-99d9-7895875ab04b-kube-api-access-4dzkq\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.495462 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.495475 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.495484 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fadbba38-e7c5-464a-99d9-7895875ab04b-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.597451 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-sg-core-conf-yaml\") 
pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.597501 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-755qx\" (UniqueName: \"kubernetes.io/projected/d308eadf-cd5d-4a84-863b-dc64302ebfda-kube-api-access-755qx\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.597549 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-config-data\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.597578 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d308eadf-cd5d-4a84-863b-dc64302ebfda-run-httpd\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.597595 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.597636 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-scripts\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.597649 5000 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d308eadf-cd5d-4a84-863b-dc64302ebfda-log-httpd\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.598127 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d308eadf-cd5d-4a84-863b-dc64302ebfda-log-httpd\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.598476 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d308eadf-cd5d-4a84-863b-dc64302ebfda-run-httpd\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.602732 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-scripts\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.602955 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.603089 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-config-data\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.603625 5000 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.615875 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-755qx\" (UniqueName: \"kubernetes.io/projected/d308eadf-cd5d-4a84-863b-dc64302ebfda-kube-api-access-755qx\") pod \"ceilometer-0\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.676780 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.952068 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qtjzr" event={"ID":"fadbba38-e7c5-464a-99d9-7895875ab04b","Type":"ContainerDied","Data":"03f2e055212211fcc8b17af99306c99c21202ddf39d24cb336d4f20e8a819dab"} Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.952155 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f2e055212211fcc8b17af99306c99c21202ddf39d24cb336d4f20e8a819dab" Jan 05 21:52:46 crc kubenswrapper[5000]: I0105 21:52:46.952118 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qtjzr" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.030297 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.031431 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.034008 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-x4dmg" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.034132 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.040218 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.105499 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.118553 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c91798a-921c-4031-8e5f-0752bebcc325-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3c91798a-921c-4031-8e5f-0752bebcc325\") " pod="openstack/nova-cell0-conductor-0" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.118790 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwrzv\" (UniqueName: \"kubernetes.io/projected/3c91798a-921c-4031-8e5f-0752bebcc325-kube-api-access-kwrzv\") pod \"nova-cell0-conductor-0\" (UID: \"3c91798a-921c-4031-8e5f-0752bebcc325\") " pod="openstack/nova-cell0-conductor-0" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.118829 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c91798a-921c-4031-8e5f-0752bebcc325-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3c91798a-921c-4031-8e5f-0752bebcc325\") " pod="openstack/nova-cell0-conductor-0" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.220847 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c91798a-921c-4031-8e5f-0752bebcc325-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3c91798a-921c-4031-8e5f-0752bebcc325\") " pod="openstack/nova-cell0-conductor-0" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.221087 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwrzv\" (UniqueName: \"kubernetes.io/projected/3c91798a-921c-4031-8e5f-0752bebcc325-kube-api-access-kwrzv\") pod \"nova-cell0-conductor-0\" (UID: \"3c91798a-921c-4031-8e5f-0752bebcc325\") " pod="openstack/nova-cell0-conductor-0" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.221138 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c91798a-921c-4031-8e5f-0752bebcc325-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3c91798a-921c-4031-8e5f-0752bebcc325\") " pod="openstack/nova-cell0-conductor-0" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.225472 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c91798a-921c-4031-8e5f-0752bebcc325-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3c91798a-921c-4031-8e5f-0752bebcc325\") " pod="openstack/nova-cell0-conductor-0" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.229859 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c91798a-921c-4031-8e5f-0752bebcc325-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3c91798a-921c-4031-8e5f-0752bebcc325\") " pod="openstack/nova-cell0-conductor-0" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.237554 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwrzv\" (UniqueName: 
\"kubernetes.io/projected/3c91798a-921c-4031-8e5f-0752bebcc325-kube-api-access-kwrzv\") pod \"nova-cell0-conductor-0\" (UID: \"3c91798a-921c-4031-8e5f-0752bebcc325\") " pod="openstack/nova-cell0-conductor-0" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.334101 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feba2f99-8642-4f0b-920d-0620e8ef4b81" path="/var/lib/kubelet/pods/feba2f99-8642-4f0b-920d-0620e8ef4b81/volumes" Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.344428 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 05 21:52:47 crc kubenswrapper[5000]: W0105 21:52:47.797753 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c91798a_921c_4031_8e5f_0752bebcc325.slice/crio-fb0ad6679bfa8a57e202eb778f7372a66cd357d7ac3c0d7e51b498ffca54a010 WatchSource:0}: Error finding container fb0ad6679bfa8a57e202eb778f7372a66cd357d7ac3c0d7e51b498ffca54a010: Status 404 returned error can't find the container with id fb0ad6679bfa8a57e202eb778f7372a66cd357d7ac3c0d7e51b498ffca54a010 Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.800757 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.962958 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d308eadf-cd5d-4a84-863b-dc64302ebfda","Type":"ContainerStarted","Data":"197a1560a5e017a7f742d09093279dc14501c21cb7f45b07827286bb39bd06af"} Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.963002 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d308eadf-cd5d-4a84-863b-dc64302ebfda","Type":"ContainerStarted","Data":"a7f58f0535c675acf296b50b34f03729e9a4eff161d68028c98b2deebc5697cb"} Jan 05 21:52:47 crc kubenswrapper[5000]: I0105 21:52:47.964485 
5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3c91798a-921c-4031-8e5f-0752bebcc325","Type":"ContainerStarted","Data":"fb0ad6679bfa8a57e202eb778f7372a66cd357d7ac3c0d7e51b498ffca54a010"} Jan 05 21:52:48 crc kubenswrapper[5000]: I0105 21:52:48.977412 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d308eadf-cd5d-4a84-863b-dc64302ebfda","Type":"ContainerStarted","Data":"c00f510b237b67d04eacb2d7f0415530e083a5e8dc90bbb0a7a54669ac9e9835"} Jan 05 21:52:48 crc kubenswrapper[5000]: I0105 21:52:48.979056 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3c91798a-921c-4031-8e5f-0752bebcc325","Type":"ContainerStarted","Data":"a54224342883deab4a1ccb441482a851acd1d310803538b50546efecb8a2e22a"} Jan 05 21:52:48 crc kubenswrapper[5000]: I0105 21:52:48.979215 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 05 21:52:50 crc kubenswrapper[5000]: I0105 21:52:50.000979 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d308eadf-cd5d-4a84-863b-dc64302ebfda","Type":"ContainerStarted","Data":"737409486df281fbf426801f1908806e96a2a2215e6479e4a88554f578cf3d85"} Jan 05 21:52:51 crc kubenswrapper[5000]: I0105 21:52:51.008294 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d308eadf-cd5d-4a84-863b-dc64302ebfda","Type":"ContainerStarted","Data":"7a6a4968715d9d44c7b8c778b6e54d185b03b1a688a862e746c6bb4413986aae"} Jan 05 21:52:51 crc kubenswrapper[5000]: I0105 21:52:51.009592 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 21:52:51 crc kubenswrapper[5000]: I0105 21:52:51.026935 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.817395534 
podStartE2EDuration="5.026917576s" podCreationTimestamp="2026-01-05 21:52:46 +0000 UTC" firstStartedPulling="2026-01-05 21:52:47.125663591 +0000 UTC m=+1122.081866060" lastFinishedPulling="2026-01-05 21:52:50.335185613 +0000 UTC m=+1125.291388102" observedRunningTime="2026-01-05 21:52:51.024684122 +0000 UTC m=+1125.980886611" watchObservedRunningTime="2026-01-05 21:52:51.026917576 +0000 UTC m=+1125.983120065" Jan 05 21:52:51 crc kubenswrapper[5000]: I0105 21:52:51.027605 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=4.027598885 podStartE2EDuration="4.027598885s" podCreationTimestamp="2026-01-05 21:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:52:49.001649311 +0000 UTC m=+1123.957851800" watchObservedRunningTime="2026-01-05 21:52:51.027598885 +0000 UTC m=+1125.983801354" Jan 05 21:52:53 crc kubenswrapper[5000]: I0105 21:52:53.099339 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:52:53 crc kubenswrapper[5000]: I0105 21:52:53.100482 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:52:57 crc kubenswrapper[5000]: I0105 21:52:57.383166 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 05 21:52:57 crc kubenswrapper[5000]: I0105 21:52:57.992114 5000 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-cell0-cell-mapping-sxtrz"] Jan 05 21:52:57 crc kubenswrapper[5000]: I0105 21:52:57.993435 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:52:57 crc kubenswrapper[5000]: I0105 21:52:57.995159 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 05 21:52:57 crc kubenswrapper[5000]: I0105 21:52:57.995964 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.005902 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-sxtrz"] Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.089909 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sxtrz\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.090021 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4bg2\" (UniqueName: \"kubernetes.io/projected/5b371d36-3b35-4109-965b-98343703594b-kube-api-access-x4bg2\") pod \"nova-cell0-cell-mapping-sxtrz\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.090110 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-config-data\") pod \"nova-cell0-cell-mapping-sxtrz\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 
05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.090228 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-scripts\") pod \"nova-cell0-cell-mapping-sxtrz\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.165163 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.171359 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.179527 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.192130 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-scripts\") pod \"nova-cell0-cell-mapping-sxtrz\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.194553 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sxtrz\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.194753 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4bg2\" (UniqueName: \"kubernetes.io/projected/5b371d36-3b35-4109-965b-98343703594b-kube-api-access-x4bg2\") pod \"nova-cell0-cell-mapping-sxtrz\" (UID: 
\"5b371d36-3b35-4109-965b-98343703594b\") " pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.194782 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-config-data\") pod \"nova-cell0-cell-mapping-sxtrz\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.196059 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.204645 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-scripts\") pod \"nova-cell0-cell-mapping-sxtrz\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.204817 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-config-data\") pod \"nova-cell0-cell-mapping-sxtrz\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.226274 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sxtrz\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.228839 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.231460 5000 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.236334 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.240536 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4bg2\" (UniqueName: \"kubernetes.io/projected/5b371d36-3b35-4109-965b-98343703594b-kube-api-access-x4bg2\") pod \"nova-cell0-cell-mapping-sxtrz\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.248445 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.296651 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe496a-c80d-4c13-b084-38073098dbb3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.297515 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.297728 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvpq5\" (UniqueName: \"kubernetes.io/projected/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-kube-api-access-kvpq5\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:52:58 crc kubenswrapper[5000]: 
I0105 21:52:58.297819 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe496a-c80d-4c13-b084-38073098dbb3-config-data\") pod \"nova-api-0\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.297966 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tkv\" (UniqueName: \"kubernetes.io/projected/befe496a-c80d-4c13-b084-38073098dbb3-kube-api-access-c2tkv\") pod \"nova-api-0\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.298071 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.298166 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe496a-c80d-4c13-b084-38073098dbb3-logs\") pod \"nova-api-0\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.302510 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.303837 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.305926 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.313790 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.381484 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.400642 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-config-data\") pod \"nova-scheduler-0\" (UID: \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\") " pod="openstack/nova-scheduler-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.400717 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvpq5\" (UniqueName: \"kubernetes.io/projected/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-kube-api-access-kvpq5\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.400746 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe496a-c80d-4c13-b084-38073098dbb3-config-data\") pod \"nova-api-0\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.400761 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqxnt\" (UniqueName: \"kubernetes.io/projected/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-kube-api-access-xqxnt\") pod \"nova-scheduler-0\" (UID: 
\"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\") " pod="openstack/nova-scheduler-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.400811 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\") " pod="openstack/nova-scheduler-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.400870 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2tkv\" (UniqueName: \"kubernetes.io/projected/befe496a-c80d-4c13-b084-38073098dbb3-kube-api-access-c2tkv\") pod \"nova-api-0\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.400914 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.400934 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe496a-c80d-4c13-b084-38073098dbb3-logs\") pod \"nova-api-0\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.400958 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe496a-c80d-4c13-b084-38073098dbb3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.400977 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.407559 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe496a-c80d-4c13-b084-38073098dbb3-logs\") pod \"nova-api-0\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.409109 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe496a-c80d-4c13-b084-38073098dbb3-config-data\") pod \"nova-api-0\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.419917 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.421342 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.422395 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.425705 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.427499 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.444694 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe496a-c80d-4c13-b084-38073098dbb3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.445340 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvpq5\" (UniqueName: \"kubernetes.io/projected/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-kube-api-access-kvpq5\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.463500 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2tkv\" (UniqueName: \"kubernetes.io/projected/befe496a-c80d-4c13-b084-38073098dbb3-kube-api-access-c2tkv\") pod \"nova-api-0\" (UID: 
\"befe496a-c80d-4c13-b084-38073098dbb3\") " pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.494776 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.503926 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-config-data\") pod \"nova-scheduler-0\" (UID: \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\") " pod="openstack/nova-scheduler-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.504005 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24dfc7af-5831-4b3e-af98-624d4da25ee8-logs\") pod \"nova-metadata-0\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.504041 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqxnt\" (UniqueName: \"kubernetes.io/projected/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-kube-api-access-xqxnt\") pod \"nova-scheduler-0\" (UID: \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\") " pod="openstack/nova-scheduler-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.504093 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\") " pod="openstack/nova-scheduler-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.504122 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dfc7af-5831-4b3e-af98-624d4da25ee8-combined-ca-bundle\") 
pod \"nova-metadata-0\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.504153 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sm6v\" (UniqueName: \"kubernetes.io/projected/24dfc7af-5831-4b3e-af98-624d4da25ee8-kube-api-access-5sm6v\") pod \"nova-metadata-0\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.504219 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dfc7af-5831-4b3e-af98-624d4da25ee8-config-data\") pod \"nova-metadata-0\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.510446 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-config-data\") pod \"nova-scheduler-0\" (UID: \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\") " pod="openstack/nova-scheduler-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.511493 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.522956 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\") " pod="openstack/nova-scheduler-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.549782 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqxnt\" (UniqueName: \"kubernetes.io/projected/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-kube-api-access-xqxnt\") 
pod \"nova-scheduler-0\" (UID: \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\") " pod="openstack/nova-scheduler-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.571020 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-lx9p4"] Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.572952 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.598219 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-lx9p4"] Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.610290 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dfc7af-5831-4b3e-af98-624d4da25ee8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.610697 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sm6v\" (UniqueName: \"kubernetes.io/projected/24dfc7af-5831-4b3e-af98-624d4da25ee8-kube-api-access-5sm6v\") pod \"nova-metadata-0\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.610850 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-dns-svc\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.610978 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.611188 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.611231 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dfc7af-5831-4b3e-af98-624d4da25ee8-config-data\") pod \"nova-metadata-0\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.611261 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.611434 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kphqg\" (UniqueName: \"kubernetes.io/projected/a46f047d-9a56-424d-a65b-5c9327eaa03d-kube-api-access-kphqg\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.611460 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/24dfc7af-5831-4b3e-af98-624d4da25ee8-logs\") pod \"nova-metadata-0\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.611527 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-config\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.612062 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24dfc7af-5831-4b3e-af98-624d4da25ee8-logs\") pod \"nova-metadata-0\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.615325 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dfc7af-5831-4b3e-af98-624d4da25ee8-config-data\") pod \"nova-metadata-0\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.621540 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dfc7af-5831-4b3e-af98-624d4da25ee8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.631012 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sm6v\" (UniqueName: \"kubernetes.io/projected/24dfc7af-5831-4b3e-af98-624d4da25ee8-kube-api-access-5sm6v\") pod \"nova-metadata-0\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc 
kubenswrapper[5000]: I0105 21:52:58.631119 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.712860 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-dns-svc\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.713208 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.713235 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.713260 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.713320 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kphqg\" (UniqueName: \"kubernetes.io/projected/a46f047d-9a56-424d-a65b-5c9327eaa03d-kube-api-access-kphqg\") pod 
\"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.713355 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-config\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.714169 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-config\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.714251 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.715560 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.715741 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-dns-svc\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " 
pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.716091 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.732959 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kphqg\" (UniqueName: \"kubernetes.io/projected/a46f047d-9a56-424d-a65b-5c9327eaa03d-kube-api-access-kphqg\") pod \"dnsmasq-dns-757b4f8459-lx9p4\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.809368 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.844420 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:52:58 crc kubenswrapper[5000]: I0105 21:52:58.909589 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.097743 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 21:52:59 crc kubenswrapper[5000]: W0105 21:52:59.099182 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa4a24a0_0380_498f_87b9_3e3b2e0915d5.slice/crio-6e3448c1e7bfcfc7ae79c3f4b9f6aca9ca4e4385b5accdf8aa2b5f4bf6bb0302 WatchSource:0}: Error finding container 6e3448c1e7bfcfc7ae79c3f4b9f6aca9ca4e4385b5accdf8aa2b5f4bf6bb0302: Status 404 returned error can't find the container with id 6e3448c1e7bfcfc7ae79c3f4b9f6aca9ca4e4385b5accdf8aa2b5f4bf6bb0302 Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.114879 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-sxtrz"] Jan 05 21:52:59 crc kubenswrapper[5000]: W0105 21:52:59.115205 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b371d36_3b35_4109_965b_98343703594b.slice/crio-d06f3e0f2a6219943585a71e7c8c8f73274e7db0c5142698a32babbb72ff4fad WatchSource:0}: Error finding container d06f3e0f2a6219943585a71e7c8c8f73274e7db0c5142698a32babbb72ff4fad: Status 404 returned error can't find the container with id d06f3e0f2a6219943585a71e7c8c8f73274e7db0c5142698a32babbb72ff4fad Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.187594 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8mr65"] Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.188748 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.191244 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.192843 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.205632 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8mr65"] Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.227917 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8mr65\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.227997 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-schw5\" (UniqueName: \"kubernetes.io/projected/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-kube-api-access-schw5\") pod \"nova-cell1-conductor-db-sync-8mr65\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.228054 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-scripts\") pod \"nova-cell1-conductor-db-sync-8mr65\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.228126 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-config-data\") pod \"nova-cell1-conductor-db-sync-8mr65\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.241471 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:52:59 crc kubenswrapper[5000]: W0105 21:52:59.243046 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbefe496a_c80d_4c13_b084_38073098dbb3.slice/crio-e0522d2b5e4818bd49c810b57fc5e976cb4954b01f5154ee42df0b6330eb8e34 WatchSource:0}: Error finding container e0522d2b5e4818bd49c810b57fc5e976cb4954b01f5154ee42df0b6330eb8e34: Status 404 returned error can't find the container with id e0522d2b5e4818bd49c810b57fc5e976cb4954b01f5154ee42df0b6330eb8e34 Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.328900 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-config-data\") pod \"nova-cell1-conductor-db-sync-8mr65\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.329039 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8mr65\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.329075 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-schw5\" (UniqueName: 
\"kubernetes.io/projected/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-kube-api-access-schw5\") pod \"nova-cell1-conductor-db-sync-8mr65\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.329117 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-scripts\") pod \"nova-cell1-conductor-db-sync-8mr65\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.337000 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8mr65\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.352291 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-scripts\") pod \"nova-cell1-conductor-db-sync-8mr65\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.360046 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-config-data\") pod \"nova-cell1-conductor-db-sync-8mr65\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.390531 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-schw5\" (UniqueName: 
\"kubernetes.io/projected/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-kube-api-access-schw5\") pod \"nova-cell1-conductor-db-sync-8mr65\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.416254 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.434242 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:52:59 crc kubenswrapper[5000]: W0105 21:52:59.437332 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c6f3b8f_72ba_42f4_bf4b_be7cb41c8a47.slice/crio-353b690f8adf6a3faea2e19bd58b07ee16a13eedfd77e1b20b9c97c3c8103be1 WatchSource:0}: Error finding container 353b690f8adf6a3faea2e19bd58b07ee16a13eedfd77e1b20b9c97c3c8103be1: Status 404 returned error can't find the container with id 353b690f8adf6a3faea2e19bd58b07ee16a13eedfd77e1b20b9c97c3c8103be1 Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.503584 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.551086 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-lx9p4"] Jan 05 21:52:59 crc kubenswrapper[5000]: W0105 21:52:59.552483 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda46f047d_9a56_424d_a65b_5c9327eaa03d.slice/crio-bcd0cea4776a78df2fc58b1ae9d8f68c2fb70bdc2c1280668af0645c97693b25 WatchSource:0}: Error finding container bcd0cea4776a78df2fc58b1ae9d8f68c2fb70bdc2c1280668af0645c97693b25: Status 404 returned error can't find the container with id bcd0cea4776a78df2fc58b1ae9d8f68c2fb70bdc2c1280668af0645c97693b25 Jan 05 21:52:59 crc kubenswrapper[5000]: I0105 21:52:59.809052 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8mr65"] Jan 05 21:52:59 crc kubenswrapper[5000]: W0105 21:52:59.817151 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod656c76b1_9f0a_4d3c_8a5b_dc5e823b8641.slice/crio-968c392d06c9e5c392d0832f21ea4423d3938a5063cda47ec4ca5a54f603c38e WatchSource:0}: Error finding container 968c392d06c9e5c392d0832f21ea4423d3938a5063cda47ec4ca5a54f603c38e: Status 404 returned error can't find the container with id 968c392d06c9e5c392d0832f21ea4423d3938a5063cda47ec4ca5a54f603c38e Jan 05 21:53:00 crc kubenswrapper[5000]: I0105 21:53:00.092705 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47","Type":"ContainerStarted","Data":"353b690f8adf6a3faea2e19bd58b07ee16a13eedfd77e1b20b9c97c3c8103be1"} Jan 05 21:53:00 crc kubenswrapper[5000]: I0105 21:53:00.095323 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sxtrz" 
event={"ID":"5b371d36-3b35-4109-965b-98343703594b","Type":"ContainerStarted","Data":"215b27dfcaa871064ed3737bef3a54e03624499cd43beb7f21f5d5d92ae2250a"} Jan 05 21:53:00 crc kubenswrapper[5000]: I0105 21:53:00.095360 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sxtrz" event={"ID":"5b371d36-3b35-4109-965b-98343703594b","Type":"ContainerStarted","Data":"d06f3e0f2a6219943585a71e7c8c8f73274e7db0c5142698a32babbb72ff4fad"} Jan 05 21:53:00 crc kubenswrapper[5000]: I0105 21:53:00.099436 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"befe496a-c80d-4c13-b084-38073098dbb3","Type":"ContainerStarted","Data":"e0522d2b5e4818bd49c810b57fc5e976cb4954b01f5154ee42df0b6330eb8e34"} Jan 05 21:53:00 crc kubenswrapper[5000]: I0105 21:53:00.103160 5000 generic.go:334] "Generic (PLEG): container finished" podID="a46f047d-9a56-424d-a65b-5c9327eaa03d" containerID="c2cd52adf8ca381cb68c9fbf1e51a9e9be2de43a151b13fd0b1aa7d2dab350e2" exitCode=0 Jan 05 21:53:00 crc kubenswrapper[5000]: I0105 21:53:00.103226 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" event={"ID":"a46f047d-9a56-424d-a65b-5c9327eaa03d","Type":"ContainerDied","Data":"c2cd52adf8ca381cb68c9fbf1e51a9e9be2de43a151b13fd0b1aa7d2dab350e2"} Jan 05 21:53:00 crc kubenswrapper[5000]: I0105 21:53:00.103251 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" event={"ID":"a46f047d-9a56-424d-a65b-5c9327eaa03d","Type":"ContainerStarted","Data":"bcd0cea4776a78df2fc58b1ae9d8f68c2fb70bdc2c1280668af0645c97693b25"} Jan 05 21:53:00 crc kubenswrapper[5000]: I0105 21:53:00.112740 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8mr65" event={"ID":"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641","Type":"ContainerStarted","Data":"1fa816145ff9efc4595553cdb6b93d241ebf70062d65280242035298a33c791e"} Jan 05 21:53:00 crc 
kubenswrapper[5000]: I0105 21:53:00.112806 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8mr65" event={"ID":"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641","Type":"ContainerStarted","Data":"968c392d06c9e5c392d0832f21ea4423d3938a5063cda47ec4ca5a54f603c38e"} Jan 05 21:53:00 crc kubenswrapper[5000]: I0105 21:53:00.115940 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa4a24a0-0380-498f-87b9-3e3b2e0915d5","Type":"ContainerStarted","Data":"6e3448c1e7bfcfc7ae79c3f4b9f6aca9ca4e4385b5accdf8aa2b5f4bf6bb0302"} Jan 05 21:53:00 crc kubenswrapper[5000]: I0105 21:53:00.121593 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24dfc7af-5831-4b3e-af98-624d4da25ee8","Type":"ContainerStarted","Data":"221f37d711957f4380cf34c928580ec9702f964d7d789e15420884da1dba4029"} Jan 05 21:53:00 crc kubenswrapper[5000]: I0105 21:53:00.128844 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-sxtrz" podStartSLOduration=3.128821962 podStartE2EDuration="3.128821962s" podCreationTimestamp="2026-01-05 21:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:00.1097928 +0000 UTC m=+1135.065995269" watchObservedRunningTime="2026-01-05 21:53:00.128821962 +0000 UTC m=+1135.085024431" Jan 05 21:53:00 crc kubenswrapper[5000]: I0105 21:53:00.162699 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-8mr65" podStartSLOduration=1.162680977 podStartE2EDuration="1.162680977s" podCreationTimestamp="2026-01-05 21:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:00.147205076 +0000 UTC m=+1135.103407545" 
watchObservedRunningTime="2026-01-05 21:53:00.162680977 +0000 UTC m=+1135.118883446" Jan 05 21:53:01 crc kubenswrapper[5000]: I0105 21:53:01.145254 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" event={"ID":"a46f047d-9a56-424d-a65b-5c9327eaa03d","Type":"ContainerStarted","Data":"af62ebcffaf50ea090758b452cc8f8eb625daf69dbeaddfd5644b8213c377b6f"} Jan 05 21:53:01 crc kubenswrapper[5000]: I0105 21:53:01.145567 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:53:01 crc kubenswrapper[5000]: I0105 21:53:01.167447 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" podStartSLOduration=3.16743007 podStartE2EDuration="3.16743007s" podCreationTimestamp="2026-01-05 21:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:01.166573936 +0000 UTC m=+1136.122776425" watchObservedRunningTime="2026-01-05 21:53:01.16743007 +0000 UTC m=+1136.123632539" Jan 05 21:53:01 crc kubenswrapper[5000]: I0105 21:53:01.954363 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 21:53:01 crc kubenswrapper[5000]: I0105 21:53:01.966943 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.166502 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"befe496a-c80d-4c13-b084-38073098dbb3","Type":"ContainerStarted","Data":"e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14"} Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.167159 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"befe496a-c80d-4c13-b084-38073098dbb3","Type":"ContainerStarted","Data":"b7e890898b40436ee2bfbc3c28bc8fd2d5fa881fb2dafdc9e5f8aaa5abf6f035"} Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.170951 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa4a24a0-0380-498f-87b9-3e3b2e0915d5","Type":"ContainerStarted","Data":"c4463eaa0729a4262c96e0a0da6cef3ed02c0e2681518cef57053aa02fdbcce7"} Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.171075 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="fa4a24a0-0380-498f-87b9-3e3b2e0915d5" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c4463eaa0729a4262c96e0a0da6cef3ed02c0e2681518cef57053aa02fdbcce7" gracePeriod=30 Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.175522 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24dfc7af-5831-4b3e-af98-624d4da25ee8","Type":"ContainerStarted","Data":"898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35"} Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.175581 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24dfc7af-5831-4b3e-af98-624d4da25ee8","Type":"ContainerStarted","Data":"abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519"} Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.175799 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="24dfc7af-5831-4b3e-af98-624d4da25ee8" containerName="nova-metadata-log" containerID="cri-o://abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519" gracePeriod=30 Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.176115 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="24dfc7af-5831-4b3e-af98-624d4da25ee8" containerName="nova-metadata-metadata" containerID="cri-o://898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35" gracePeriod=30 Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.182167 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47","Type":"ContainerStarted","Data":"29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4"} Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.189647 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.274905405 podStartE2EDuration="5.189624856s" podCreationTimestamp="2026-01-05 21:52:58 +0000 UTC" firstStartedPulling="2026-01-05 21:52:59.245292645 +0000 UTC m=+1134.201495104" lastFinishedPulling="2026-01-05 21:53:02.160012086 +0000 UTC m=+1137.116214555" observedRunningTime="2026-01-05 21:53:03.182409 +0000 UTC m=+1138.138611489" watchObservedRunningTime="2026-01-05 21:53:03.189624856 +0000 UTC m=+1138.145827335" Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.205951 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.152027925 podStartE2EDuration="5.205926991s" podCreationTimestamp="2026-01-05 21:52:58 +0000 UTC" firstStartedPulling="2026-01-05 21:52:59.104414351 +0000 UTC m=+1134.060616820" lastFinishedPulling="2026-01-05 21:53:02.158313417 +0000 UTC m=+1137.114515886" observedRunningTime="2026-01-05 21:53:03.200341221 +0000 UTC m=+1138.156543700" watchObservedRunningTime="2026-01-05 21:53:03.205926991 +0000 UTC m=+1138.162129460" Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.230708 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.513952337 podStartE2EDuration="5.230684236s" podCreationTimestamp="2026-01-05 21:52:58 +0000 
UTC" firstStartedPulling="2026-01-05 21:52:59.447025304 +0000 UTC m=+1134.403227783" lastFinishedPulling="2026-01-05 21:53:02.163757213 +0000 UTC m=+1137.119959682" observedRunningTime="2026-01-05 21:53:03.222320928 +0000 UTC m=+1138.178523397" watchObservedRunningTime="2026-01-05 21:53:03.230684236 +0000 UTC m=+1138.186886705" Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.272503 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.539549928 podStartE2EDuration="5.272474897s" podCreationTimestamp="2026-01-05 21:52:58 +0000 UTC" firstStartedPulling="2026-01-05 21:52:59.42440889 +0000 UTC m=+1134.380611359" lastFinishedPulling="2026-01-05 21:53:02.157333859 +0000 UTC m=+1137.113536328" observedRunningTime="2026-01-05 21:53:03.25572908 +0000 UTC m=+1138.211931549" watchObservedRunningTime="2026-01-05 21:53:03.272474897 +0000 UTC m=+1138.228677366" Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.497072 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.809686 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.845013 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 21:53:03 crc kubenswrapper[5000]: I0105 21:53:03.845084 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.163014 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.194033 5000 generic.go:334] "Generic (PLEG): container finished" podID="24dfc7af-5831-4b3e-af98-624d4da25ee8" containerID="898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35" exitCode=0 Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.194058 5000 generic.go:334] "Generic (PLEG): container finished" podID="24dfc7af-5831-4b3e-af98-624d4da25ee8" containerID="abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519" exitCode=143 Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.194145 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.194215 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24dfc7af-5831-4b3e-af98-624d4da25ee8","Type":"ContainerDied","Data":"898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35"} Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.194252 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24dfc7af-5831-4b3e-af98-624d4da25ee8","Type":"ContainerDied","Data":"abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519"} Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.194280 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24dfc7af-5831-4b3e-af98-624d4da25ee8","Type":"ContainerDied","Data":"221f37d711957f4380cf34c928580ec9702f964d7d789e15420884da1dba4029"} Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.194299 5000 scope.go:117] "RemoveContainer" containerID="898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.220228 5000 scope.go:117] "RemoveContainer" 
containerID="abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.239582 5000 scope.go:117] "RemoveContainer" containerID="898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35" Jan 05 21:53:04 crc kubenswrapper[5000]: E0105 21:53:04.240155 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35\": container with ID starting with 898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35 not found: ID does not exist" containerID="898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.240197 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35"} err="failed to get container status \"898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35\": rpc error: code = NotFound desc = could not find container \"898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35\": container with ID starting with 898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35 not found: ID does not exist" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.240223 5000 scope.go:117] "RemoveContainer" containerID="abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519" Jan 05 21:53:04 crc kubenswrapper[5000]: E0105 21:53:04.240926 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519\": container with ID starting with abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519 not found: ID does not exist" containerID="abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519" Jan 05 21:53:04 crc 
kubenswrapper[5000]: I0105 21:53:04.240964 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519"} err="failed to get container status \"abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519\": rpc error: code = NotFound desc = could not find container \"abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519\": container with ID starting with abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519 not found: ID does not exist" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.240990 5000 scope.go:117] "RemoveContainer" containerID="898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.241291 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35"} err="failed to get container status \"898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35\": rpc error: code = NotFound desc = could not find container \"898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35\": container with ID starting with 898983fc6690e926b014ffd64a7d722b0a1131f0c7e2c6add82bb79d45ca4a35 not found: ID does not exist" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.241314 5000 scope.go:117] "RemoveContainer" containerID="abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.241668 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519"} err="failed to get container status \"abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519\": rpc error: code = NotFound desc = could not find container \"abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519\": container 
with ID starting with abb7a87ebf8e1f7be6be168e957f9a2b8fd0c47a67547c386f3515472a3d3519 not found: ID does not exist" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.341839 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dfc7af-5831-4b3e-af98-624d4da25ee8-config-data\") pod \"24dfc7af-5831-4b3e-af98-624d4da25ee8\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.341906 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24dfc7af-5831-4b3e-af98-624d4da25ee8-logs\") pod \"24dfc7af-5831-4b3e-af98-624d4da25ee8\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.342051 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dfc7af-5831-4b3e-af98-624d4da25ee8-combined-ca-bundle\") pod \"24dfc7af-5831-4b3e-af98-624d4da25ee8\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.342166 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sm6v\" (UniqueName: \"kubernetes.io/projected/24dfc7af-5831-4b3e-af98-624d4da25ee8-kube-api-access-5sm6v\") pod \"24dfc7af-5831-4b3e-af98-624d4da25ee8\" (UID: \"24dfc7af-5831-4b3e-af98-624d4da25ee8\") " Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.344314 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24dfc7af-5831-4b3e-af98-624d4da25ee8-logs" (OuterVolumeSpecName: "logs") pod "24dfc7af-5831-4b3e-af98-624d4da25ee8" (UID: "24dfc7af-5831-4b3e-af98-624d4da25ee8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.362772 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24dfc7af-5831-4b3e-af98-624d4da25ee8-kube-api-access-5sm6v" (OuterVolumeSpecName: "kube-api-access-5sm6v") pod "24dfc7af-5831-4b3e-af98-624d4da25ee8" (UID: "24dfc7af-5831-4b3e-af98-624d4da25ee8"). InnerVolumeSpecName "kube-api-access-5sm6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.368751 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24dfc7af-5831-4b3e-af98-624d4da25ee8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24dfc7af-5831-4b3e-af98-624d4da25ee8" (UID: "24dfc7af-5831-4b3e-af98-624d4da25ee8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.371970 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24dfc7af-5831-4b3e-af98-624d4da25ee8-config-data" (OuterVolumeSpecName: "config-data") pod "24dfc7af-5831-4b3e-af98-624d4da25ee8" (UID: "24dfc7af-5831-4b3e-af98-624d4da25ee8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.444538 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dfc7af-5831-4b3e-af98-624d4da25ee8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.444573 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sm6v\" (UniqueName: \"kubernetes.io/projected/24dfc7af-5831-4b3e-af98-624d4da25ee8-kube-api-access-5sm6v\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.444583 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dfc7af-5831-4b3e-af98-624d4da25ee8-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.444591 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24dfc7af-5831-4b3e-af98-624d4da25ee8-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.537259 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.552592 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.572708 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:04 crc kubenswrapper[5000]: E0105 21:53:04.573099 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24dfc7af-5831-4b3e-af98-624d4da25ee8" containerName="nova-metadata-log" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.573116 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="24dfc7af-5831-4b3e-af98-624d4da25ee8" containerName="nova-metadata-log" Jan 05 21:53:04 crc 
kubenswrapper[5000]: E0105 21:53:04.573138 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24dfc7af-5831-4b3e-af98-624d4da25ee8" containerName="nova-metadata-metadata" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.573145 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="24dfc7af-5831-4b3e-af98-624d4da25ee8" containerName="nova-metadata-metadata" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.573334 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="24dfc7af-5831-4b3e-af98-624d4da25ee8" containerName="nova-metadata-log" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.573359 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="24dfc7af-5831-4b3e-af98-624d4da25ee8" containerName="nova-metadata-metadata" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.574350 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.584551 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.585628 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.603003 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.749799 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.749926 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.749967 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6gs5\" (UniqueName: \"kubernetes.io/projected/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-kube-api-access-n6gs5\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.750080 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-logs\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.750106 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-config-data\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.852619 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-logs\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.853027 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-logs\") pod 
\"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.853204 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-config-data\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.853473 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.853722 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.853977 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6gs5\" (UniqueName: \"kubernetes.io/projected/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-kube-api-access-n6gs5\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.856428 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-config-data\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.860291 5000 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.867712 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.871004 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6gs5\" (UniqueName: \"kubernetes.io/projected/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-kube-api-access-n6gs5\") pod \"nova-metadata-0\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " pod="openstack/nova-metadata-0" Jan 05 21:53:04 crc kubenswrapper[5000]: I0105 21:53:04.935614 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:53:05 crc kubenswrapper[5000]: I0105 21:53:05.340009 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24dfc7af-5831-4b3e-af98-624d4da25ee8" path="/var/lib/kubelet/pods/24dfc7af-5831-4b3e-af98-624d4da25ee8/volumes" Jan 05 21:53:05 crc kubenswrapper[5000]: I0105 21:53:05.405037 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:06 crc kubenswrapper[5000]: I0105 21:53:06.225956 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6","Type":"ContainerStarted","Data":"40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9"} Jan 05 21:53:06 crc kubenswrapper[5000]: I0105 21:53:06.226238 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6","Type":"ContainerStarted","Data":"38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d"} Jan 05 21:53:06 crc kubenswrapper[5000]: I0105 21:53:06.226342 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6","Type":"ContainerStarted","Data":"cea8a8c841024f4bc68c5ee9bde0c774d449fd0b9fcac05d41666d60700c1023"} Jan 05 21:53:06 crc kubenswrapper[5000]: I0105 21:53:06.255603 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.255584008 podStartE2EDuration="2.255584008s" podCreationTimestamp="2026-01-05 21:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:06.247321992 +0000 UTC m=+1141.203524471" watchObservedRunningTime="2026-01-05 21:53:06.255584008 +0000 UTC m=+1141.211786487" Jan 05 21:53:07 crc kubenswrapper[5000]: I0105 21:53:07.242037 5000 
generic.go:334] "Generic (PLEG): container finished" podID="5b371d36-3b35-4109-965b-98343703594b" containerID="215b27dfcaa871064ed3737bef3a54e03624499cd43beb7f21f5d5d92ae2250a" exitCode=0 Jan 05 21:53:07 crc kubenswrapper[5000]: I0105 21:53:07.242116 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sxtrz" event={"ID":"5b371d36-3b35-4109-965b-98343703594b","Type":"ContainerDied","Data":"215b27dfcaa871064ed3737bef3a54e03624499cd43beb7f21f5d5d92ae2250a"} Jan 05 21:53:07 crc kubenswrapper[5000]: I0105 21:53:07.244710 5000 generic.go:334] "Generic (PLEG): container finished" podID="656c76b1-9f0a-4d3c-8a5b-dc5e823b8641" containerID="1fa816145ff9efc4595553cdb6b93d241ebf70062d65280242035298a33c791e" exitCode=0 Jan 05 21:53:07 crc kubenswrapper[5000]: I0105 21:53:07.244810 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8mr65" event={"ID":"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641","Type":"ContainerDied","Data":"1fa816145ff9efc4595553cdb6b93d241ebf70062d65280242035298a33c791e"} Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.631933 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.632355 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.746190 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.751249 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.809478 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.838501 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.911052 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.935779 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-schw5\" (UniqueName: \"kubernetes.io/projected/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-kube-api-access-schw5\") pod \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.935933 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-combined-ca-bundle\") pod \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.935977 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-config-data\") pod \"5b371d36-3b35-4109-965b-98343703594b\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.936031 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-config-data\") pod \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\" (UID: 
\"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.936106 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4bg2\" (UniqueName: \"kubernetes.io/projected/5b371d36-3b35-4109-965b-98343703594b-kube-api-access-x4bg2\") pod \"5b371d36-3b35-4109-965b-98343703594b\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.936152 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-combined-ca-bundle\") pod \"5b371d36-3b35-4109-965b-98343703594b\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.936996 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-scripts\") pod \"5b371d36-3b35-4109-965b-98343703594b\" (UID: \"5b371d36-3b35-4109-965b-98343703594b\") " Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.937023 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-scripts\") pod \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\" (UID: \"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641\") " Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.942476 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-kube-api-access-schw5" (OuterVolumeSpecName: "kube-api-access-schw5") pod "656c76b1-9f0a-4d3c-8a5b-dc5e823b8641" (UID: "656c76b1-9f0a-4d3c-8a5b-dc5e823b8641"). InnerVolumeSpecName "kube-api-access-schw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.942520 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b371d36-3b35-4109-965b-98343703594b-kube-api-access-x4bg2" (OuterVolumeSpecName: "kube-api-access-x4bg2") pod "5b371d36-3b35-4109-965b-98343703594b" (UID: "5b371d36-3b35-4109-965b-98343703594b"). InnerVolumeSpecName "kube-api-access-x4bg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.943963 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-scripts" (OuterVolumeSpecName: "scripts") pod "5b371d36-3b35-4109-965b-98343703594b" (UID: "5b371d36-3b35-4109-965b-98343703594b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.945228 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-scripts" (OuterVolumeSpecName: "scripts") pod "656c76b1-9f0a-4d3c-8a5b-dc5e823b8641" (UID: "656c76b1-9f0a-4d3c-8a5b-dc5e823b8641"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.981841 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "656c76b1-9f0a-4d3c-8a5b-dc5e823b8641" (UID: "656c76b1-9f0a-4d3c-8a5b-dc5e823b8641"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.983493 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-h8ftr"] Jan 05 21:53:08 crc kubenswrapper[5000]: I0105 21:53:08.983713 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" podUID="2d8e5c82-8a89-4455-9c90-f69a8442822d" containerName="dnsmasq-dns" containerID="cri-o://356fe08f62620158b7489eb2ff279c54719bb52b2d93daab2de4106c7342bf92" gracePeriod=10 Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.001214 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-config-data" (OuterVolumeSpecName: "config-data") pod "5b371d36-3b35-4109-965b-98343703594b" (UID: "5b371d36-3b35-4109-965b-98343703594b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.004149 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-config-data" (OuterVolumeSpecName: "config-data") pod "656c76b1-9f0a-4d3c-8a5b-dc5e823b8641" (UID: "656c76b1-9f0a-4d3c-8a5b-dc5e823b8641"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.014565 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b371d36-3b35-4109-965b-98343703594b" (UID: "5b371d36-3b35-4109-965b-98343703594b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.039929 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.039969 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4bg2\" (UniqueName: \"kubernetes.io/projected/5b371d36-3b35-4109-965b-98343703594b-kube-api-access-x4bg2\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.039981 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.039989 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.039997 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.040007 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-schw5\" (UniqueName: \"kubernetes.io/projected/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-kube-api-access-schw5\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.040016 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.040026 5000 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b371d36-3b35-4109-965b-98343703594b-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.290107 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sxtrz" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.290323 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sxtrz" event={"ID":"5b371d36-3b35-4109-965b-98343703594b","Type":"ContainerDied","Data":"d06f3e0f2a6219943585a71e7c8c8f73274e7db0c5142698a32babbb72ff4fad"} Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.294322 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d06f3e0f2a6219943585a71e7c8c8f73274e7db0c5142698a32babbb72ff4fad" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.295853 5000 generic.go:334] "Generic (PLEG): container finished" podID="2d8e5c82-8a89-4455-9c90-f69a8442822d" containerID="356fe08f62620158b7489eb2ff279c54719bb52b2d93daab2de4106c7342bf92" exitCode=0 Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.295938 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" event={"ID":"2d8e5c82-8a89-4455-9c90-f69a8442822d","Type":"ContainerDied","Data":"356fe08f62620158b7489eb2ff279c54719bb52b2d93daab2de4106c7342bf92"} Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.309182 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8mr65" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.309934 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8mr65" event={"ID":"656c76b1-9f0a-4d3c-8a5b-dc5e823b8641","Type":"ContainerDied","Data":"968c392d06c9e5c392d0832f21ea4423d3938a5063cda47ec4ca5a54f603c38e"} Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.309986 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="968c392d06c9e5c392d0832f21ea4423d3938a5063cda47ec4ca5a54f603c38e" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.376909 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.409749 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 05 21:53:09 crc kubenswrapper[5000]: E0105 21:53:09.413829 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b371d36-3b35-4109-965b-98343703594b" containerName="nova-manage" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.413852 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b371d36-3b35-4109-965b-98343703594b" containerName="nova-manage" Jan 05 21:53:09 crc kubenswrapper[5000]: E0105 21:53:09.413878 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656c76b1-9f0a-4d3c-8a5b-dc5e823b8641" containerName="nova-cell1-conductor-db-sync" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.413884 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="656c76b1-9f0a-4d3c-8a5b-dc5e823b8641" containerName="nova-cell1-conductor-db-sync" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.414073 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b371d36-3b35-4109-965b-98343703594b" containerName="nova-manage" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 
21:53:09.414100 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="656c76b1-9f0a-4d3c-8a5b-dc5e823b8641" containerName="nova-cell1-conductor-db-sync" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.414658 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.423982 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.424338 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.459266 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.557088 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.557366 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="befe496a-c80d-4c13-b084-38073098dbb3" containerName="nova-api-log" containerID="cri-o://b7e890898b40436ee2bfbc3c28bc8fd2d5fa881fb2dafdc9e5f8aaa5abf6f035" gracePeriod=30 Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.557412 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="befe496a-c80d-4c13-b084-38073098dbb3" containerName="nova-api-api" containerID="cri-o://e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14" gracePeriod=30 Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.562960 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="befe496a-c80d-4c13-b084-38073098dbb3" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": EOF" Jan 05 
21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.563132 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="befe496a-c80d-4c13-b084-38073098dbb3" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": EOF" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.567644 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-ovsdbserver-sb\") pod \"2d8e5c82-8a89-4455-9c90-f69a8442822d\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.567695 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-dns-swift-storage-0\") pod \"2d8e5c82-8a89-4455-9c90-f69a8442822d\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.567719 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-dns-svc\") pod \"2d8e5c82-8a89-4455-9c90-f69a8442822d\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.567799 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cklsh\" (UniqueName: \"kubernetes.io/projected/2d8e5c82-8a89-4455-9c90-f69a8442822d-kube-api-access-cklsh\") pod \"2d8e5c82-8a89-4455-9c90-f69a8442822d\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.567866 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-config\") pod \"2d8e5c82-8a89-4455-9c90-f69a8442822d\" 
(UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.567931 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-ovsdbserver-nb\") pod \"2d8e5c82-8a89-4455-9c90-f69a8442822d\" (UID: \"2d8e5c82-8a89-4455-9c90-f69a8442822d\") " Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.568164 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f341f64a-418c-4790-a14a-fc9768d6fc82-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f341f64a-418c-4790-a14a-fc9768d6fc82\") " pod="openstack/nova-cell1-conductor-0" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.568206 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp469\" (UniqueName: \"kubernetes.io/projected/f341f64a-418c-4790-a14a-fc9768d6fc82-kube-api-access-qp469\") pod \"nova-cell1-conductor-0\" (UID: \"f341f64a-418c-4790-a14a-fc9768d6fc82\") " pod="openstack/nova-cell1-conductor-0" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.568240 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f341f64a-418c-4790-a14a-fc9768d6fc82-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f341f64a-418c-4790-a14a-fc9768d6fc82\") " pod="openstack/nova-cell1-conductor-0" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.584067 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8e5c82-8a89-4455-9c90-f69a8442822d-kube-api-access-cklsh" (OuterVolumeSpecName: "kube-api-access-cklsh") pod "2d8e5c82-8a89-4455-9c90-f69a8442822d" (UID: "2d8e5c82-8a89-4455-9c90-f69a8442822d"). 
InnerVolumeSpecName "kube-api-access-cklsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.588567 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.588784 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" containerName="nova-metadata-log" containerID="cri-o://38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d" gracePeriod=30 Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.589031 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" containerName="nova-metadata-metadata" containerID="cri-o://40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9" gracePeriod=30 Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.649959 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-config" (OuterVolumeSpecName: "config") pod "2d8e5c82-8a89-4455-9c90-f69a8442822d" (UID: "2d8e5c82-8a89-4455-9c90-f69a8442822d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.658284 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2d8e5c82-8a89-4455-9c90-f69a8442822d" (UID: "2d8e5c82-8a89-4455-9c90-f69a8442822d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.659457 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d8e5c82-8a89-4455-9c90-f69a8442822d" (UID: "2d8e5c82-8a89-4455-9c90-f69a8442822d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.669518 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f341f64a-418c-4790-a14a-fc9768d6fc82-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f341f64a-418c-4790-a14a-fc9768d6fc82\") " pod="openstack/nova-cell1-conductor-0" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.669577 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp469\" (UniqueName: \"kubernetes.io/projected/f341f64a-418c-4790-a14a-fc9768d6fc82-kube-api-access-qp469\") pod \"nova-cell1-conductor-0\" (UID: \"f341f64a-418c-4790-a14a-fc9768d6fc82\") " pod="openstack/nova-cell1-conductor-0" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.669619 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f341f64a-418c-4790-a14a-fc9768d6fc82-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f341f64a-418c-4790-a14a-fc9768d6fc82\") " pod="openstack/nova-cell1-conductor-0" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.669671 5000 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.669682 5000 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cklsh\" (UniqueName: \"kubernetes.io/projected/2d8e5c82-8a89-4455-9c90-f69a8442822d-kube-api-access-cklsh\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.669692 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.669699 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.672478 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d8e5c82-8a89-4455-9c90-f69a8442822d" (UID: "2d8e5c82-8a89-4455-9c90-f69a8442822d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.673548 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d8e5c82-8a89-4455-9c90-f69a8442822d" (UID: "2d8e5c82-8a89-4455-9c90-f69a8442822d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.675067 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f341f64a-418c-4790-a14a-fc9768d6fc82-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f341f64a-418c-4790-a14a-fc9768d6fc82\") " pod="openstack/nova-cell1-conductor-0" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.675490 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f341f64a-418c-4790-a14a-fc9768d6fc82-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f341f64a-418c-4790-a14a-fc9768d6fc82\") " pod="openstack/nova-cell1-conductor-0" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.686094 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp469\" (UniqueName: \"kubernetes.io/projected/f341f64a-418c-4790-a14a-fc9768d6fc82-kube-api-access-qp469\") pod \"nova-cell1-conductor-0\" (UID: \"f341f64a-418c-4790-a14a-fc9768d6fc82\") " pod="openstack/nova-cell1-conductor-0" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.757771 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.771993 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.772042 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8e5c82-8a89-4455-9c90-f69a8442822d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.914741 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.936237 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 21:53:09 crc kubenswrapper[5000]: I0105 21:53:09.936309 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.212976 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.314078 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.331786 5000 generic.go:334] "Generic (PLEG): container finished" podID="6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" containerID="40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9" exitCode=0 Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.331823 5000 generic.go:334] "Generic (PLEG): container finished" podID="6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" containerID="38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d" exitCode=143 Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.331882 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6","Type":"ContainerDied","Data":"40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9"} Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.331909 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.331929 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6","Type":"ContainerDied","Data":"38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d"} Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.331945 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6","Type":"ContainerDied","Data":"cea8a8c841024f4bc68c5ee9bde0c774d449fd0b9fcac05d41666d60700c1023"} Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.331964 5000 scope.go:117] "RemoveContainer" containerID="40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.339436 5000 generic.go:334] "Generic (PLEG): container finished" podID="befe496a-c80d-4c13-b084-38073098dbb3" containerID="b7e890898b40436ee2bfbc3c28bc8fd2d5fa881fb2dafdc9e5f8aaa5abf6f035" exitCode=143 Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.339577 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"befe496a-c80d-4c13-b084-38073098dbb3","Type":"ContainerDied","Data":"b7e890898b40436ee2bfbc3c28bc8fd2d5fa881fb2dafdc9e5f8aaa5abf6f035"} Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.342557 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.342675 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-h8ftr" event={"ID":"2d8e5c82-8a89-4455-9c90-f69a8442822d","Type":"ContainerDied","Data":"f43e008cf5ab9b82f3433f0a9ec3b50c2bfb7a049b659b810700bdbaf9aabecd"} Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.385163 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-logs\") pod \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.385221 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-combined-ca-bundle\") pod \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.385400 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-nova-metadata-tls-certs\") pod \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.385432 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-config-data\") pod \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.385478 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6gs5\" (UniqueName: 
\"kubernetes.io/projected/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-kube-api-access-n6gs5\") pod \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\" (UID: \"6a176fd9-427f-4b3a-a87e-4bbc1f4465f6\") " Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.389066 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-kube-api-access-n6gs5" (OuterVolumeSpecName: "kube-api-access-n6gs5") pod "6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" (UID: "6a176fd9-427f-4b3a-a87e-4bbc1f4465f6"). InnerVolumeSpecName "kube-api-access-n6gs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.389332 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-logs" (OuterVolumeSpecName: "logs") pod "6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" (UID: "6a176fd9-427f-4b3a-a87e-4bbc1f4465f6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.391680 5000 scope.go:117] "RemoveContainer" containerID="38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.391696 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-h8ftr"] Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.411095 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-h8ftr"] Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.421018 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-config-data" (OuterVolumeSpecName: "config-data") pod "6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" (UID: "6a176fd9-427f-4b3a-a87e-4bbc1f4465f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.423882 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" (UID: "6a176fd9-427f-4b3a-a87e-4bbc1f4465f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.430303 5000 scope.go:117] "RemoveContainer" containerID="40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9" Jan 05 21:53:10 crc kubenswrapper[5000]: E0105 21:53:10.430752 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9\": container with ID starting with 40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9 not found: ID does not exist" containerID="40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.430787 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9"} err="failed to get container status \"40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9\": rpc error: code = NotFound desc = could not find container \"40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9\": container with ID starting with 40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9 not found: ID does not exist" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.430817 5000 scope.go:117] "RemoveContainer" containerID="38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d" Jan 05 21:53:10 crc kubenswrapper[5000]: E0105 21:53:10.431265 5000 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d\": container with ID starting with 38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d not found: ID does not exist" containerID="38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.431287 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d"} err="failed to get container status \"38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d\": rpc error: code = NotFound desc = could not find container \"38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d\": container with ID starting with 38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d not found: ID does not exist" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.431300 5000 scope.go:117] "RemoveContainer" containerID="40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.431621 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9"} err="failed to get container status \"40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9\": rpc error: code = NotFound desc = could not find container \"40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9\": container with ID starting with 40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9 not found: ID does not exist" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.431662 5000 scope.go:117] "RemoveContainer" containerID="38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.431978 5000 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d"} err="failed to get container status \"38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d\": rpc error: code = NotFound desc = could not find container \"38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d\": container with ID starting with 38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d not found: ID does not exist" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.432005 5000 scope.go:117] "RemoveContainer" containerID="356fe08f62620158b7489eb2ff279c54719bb52b2d93daab2de4106c7342bf92" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.460596 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" (UID: "6a176fd9-427f-4b3a-a87e-4bbc1f4465f6"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.468282 5000 scope.go:117] "RemoveContainer" containerID="7dc5ff7af7e132dd185bb34b05348536057cbb17dde36cb085b1b814f4cfa93a" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.488253 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6gs5\" (UniqueName: \"kubernetes.io/projected/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-kube-api-access-n6gs5\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.488286 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.488299 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.488310 5000 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.488319 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.685305 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.694778 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.715664 5000 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-metadata-0"] Jan 05 21:53:10 crc kubenswrapper[5000]: E0105 21:53:10.716032 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8e5c82-8a89-4455-9c90-f69a8442822d" containerName="dnsmasq-dns" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.716048 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8e5c82-8a89-4455-9c90-f69a8442822d" containerName="dnsmasq-dns" Jan 05 21:53:10 crc kubenswrapper[5000]: E0105 21:53:10.716073 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" containerName="nova-metadata-log" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.716080 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" containerName="nova-metadata-log" Jan 05 21:53:10 crc kubenswrapper[5000]: E0105 21:53:10.716099 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" containerName="nova-metadata-metadata" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.716106 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" containerName="nova-metadata-metadata" Jan 05 21:53:10 crc kubenswrapper[5000]: E0105 21:53:10.716134 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8e5c82-8a89-4455-9c90-f69a8442822d" containerName="init" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.716146 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8e5c82-8a89-4455-9c90-f69a8442822d" containerName="init" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.716310 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8e5c82-8a89-4455-9c90-f69a8442822d" containerName="dnsmasq-dns" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.716326 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" 
containerName="nova-metadata-log" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.716334 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" containerName="nova-metadata-metadata" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.717212 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.722254 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.722464 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.729124 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.895453 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.895535 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6xl2\" (UniqueName: \"kubernetes.io/projected/4a525a58-3825-42e1-a174-cf6efd751b30-kube-api-access-w6xl2\") pod \"nova-metadata-0\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.895663 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a525a58-3825-42e1-a174-cf6efd751b30-logs\") pod \"nova-metadata-0\" (UID: 
\"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.895705 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-config-data\") pod \"nova-metadata-0\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.895721 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.997488 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a525a58-3825-42e1-a174-cf6efd751b30-logs\") pod \"nova-metadata-0\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.997558 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-config-data\") pod \"nova-metadata-0\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.997584 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.997637 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.997697 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6xl2\" (UniqueName: \"kubernetes.io/projected/4a525a58-3825-42e1-a174-cf6efd751b30-kube-api-access-w6xl2\") pod \"nova-metadata-0\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:10 crc kubenswrapper[5000]: I0105 21:53:10.997978 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a525a58-3825-42e1-a174-cf6efd751b30-logs\") pod \"nova-metadata-0\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:11 crc kubenswrapper[5000]: I0105 21:53:11.001509 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:11 crc kubenswrapper[5000]: I0105 21:53:11.004490 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-config-data\") pod \"nova-metadata-0\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:11 crc kubenswrapper[5000]: I0105 21:53:11.008539 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:11 crc kubenswrapper[5000]: I0105 21:53:11.016916 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6xl2\" (UniqueName: \"kubernetes.io/projected/4a525a58-3825-42e1-a174-cf6efd751b30-kube-api-access-w6xl2\") pod \"nova-metadata-0\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " pod="openstack/nova-metadata-0" Jan 05 21:53:11 crc kubenswrapper[5000]: I0105 21:53:11.042473 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:53:11 crc kubenswrapper[5000]: I0105 21:53:11.344296 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d8e5c82-8a89-4455-9c90-f69a8442822d" path="/var/lib/kubelet/pods/2d8e5c82-8a89-4455-9c90-f69a8442822d/volumes" Jan 05 21:53:11 crc kubenswrapper[5000]: I0105 21:53:11.346117 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a176fd9-427f-4b3a-a87e-4bbc1f4465f6" path="/var/lib/kubelet/pods/6a176fd9-427f-4b3a-a87e-4bbc1f4465f6/volumes" Jan 05 21:53:11 crc kubenswrapper[5000]: I0105 21:53:11.369596 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f341f64a-418c-4790-a14a-fc9768d6fc82","Type":"ContainerStarted","Data":"4726337d2d57577f1087eb6356b4e2c2b73ce0ac9c9db4bf8dfdc5c7d7bbf4bc"} Jan 05 21:53:11 crc kubenswrapper[5000]: I0105 21:53:11.369930 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f341f64a-418c-4790-a14a-fc9768d6fc82","Type":"ContainerStarted","Data":"e4d9851302276cbdd1700bf251389f6520b25c7ad86cba57eba4d3cbe88d68f7"} Jan 05 21:53:11 crc kubenswrapper[5000]: I0105 21:53:11.370199 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 05 21:53:11 crc kubenswrapper[5000]: I0105 21:53:11.371775 5000 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47" containerName="nova-scheduler-scheduler" containerID="cri-o://29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4" gracePeriod=30 Jan 05 21:53:11 crc kubenswrapper[5000]: I0105 21:53:11.387033 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.387010247 podStartE2EDuration="2.387010247s" podCreationTimestamp="2026-01-05 21:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:11.384266089 +0000 UTC m=+1146.340468568" watchObservedRunningTime="2026-01-05 21:53:11.387010247 +0000 UTC m=+1146.343212716" Jan 05 21:53:11 crc kubenswrapper[5000]: I0105 21:53:11.534674 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:12 crc kubenswrapper[5000]: I0105 21:53:12.387340 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a525a58-3825-42e1-a174-cf6efd751b30","Type":"ContainerStarted","Data":"b4d3bfd757243f933cf2b3b90c6a87107a029c78aa29f36a8499939877ad7b76"} Jan 05 21:53:12 crc kubenswrapper[5000]: I0105 21:53:12.388647 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a525a58-3825-42e1-a174-cf6efd751b30","Type":"ContainerStarted","Data":"5a4d4d738e2a18dd6384f140471e9354a77df767b241e8b9b57e0939c6cb0c2f"} Jan 05 21:53:12 crc kubenswrapper[5000]: I0105 21:53:12.388759 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a525a58-3825-42e1-a174-cf6efd751b30","Type":"ContainerStarted","Data":"422c5029a500c9bcf5ba39c431f78348237bc3de33c2229247c4696733e7c209"} Jan 05 21:53:12 crc kubenswrapper[5000]: I0105 21:53:12.421147 5000 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.421103246 podStartE2EDuration="2.421103246s" podCreationTimestamp="2026-01-05 21:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:12.413732306 +0000 UTC m=+1147.369934775" watchObservedRunningTime="2026-01-05 21:53:12.421103246 +0000 UTC m=+1147.377305715" Jan 05 21:53:13 crc kubenswrapper[5000]: E0105 21:53:13.811026 5000 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 05 21:53:13 crc kubenswrapper[5000]: E0105 21:53:13.812937 5000 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 05 21:53:13 crc kubenswrapper[5000]: E0105 21:53:13.814755 5000 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 05 21:53:13 crc kubenswrapper[5000]: E0105 21:53:13.814794 5000 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" 
podUID="6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47" containerName="nova-scheduler-scheduler" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.324515 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.413914 5000 generic.go:334] "Generic (PLEG): container finished" podID="6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47" containerID="29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4" exitCode=0 Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.413952 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.414016 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47","Type":"ContainerDied","Data":"29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4"} Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.414088 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47","Type":"ContainerDied","Data":"353b690f8adf6a3faea2e19bd58b07ee16a13eedfd77e1b20b9c97c3c8103be1"} Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.414119 5000 scope.go:117] "RemoveContainer" containerID="29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.433853 5000 scope.go:117] "RemoveContainer" containerID="29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4" Jan 05 21:53:14 crc kubenswrapper[5000]: E0105 21:53:14.434309 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4\": container with ID starting with 
29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4 not found: ID does not exist" containerID="29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.434350 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4"} err="failed to get container status \"29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4\": rpc error: code = NotFound desc = could not find container \"29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4\": container with ID starting with 29bdc9b92fe043dc5b0283b60efa6dda30681670b45bebd4f6e66e1b230111f4 not found: ID does not exist" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.465171 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-config-data\") pod \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\" (UID: \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\") " Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.465747 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqxnt\" (UniqueName: \"kubernetes.io/projected/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-kube-api-access-xqxnt\") pod \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\" (UID: \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\") " Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.465938 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-combined-ca-bundle\") pod \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\" (UID: \"6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47\") " Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.472096 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-kube-api-access-xqxnt" (OuterVolumeSpecName: "kube-api-access-xqxnt") pod "6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47" (UID: "6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47"). InnerVolumeSpecName "kube-api-access-xqxnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.497193 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47" (UID: "6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.499063 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-config-data" (OuterVolumeSpecName: "config-data") pod "6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47" (UID: "6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.570257 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.570431 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqxnt\" (UniqueName: \"kubernetes.io/projected/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-kube-api-access-xqxnt\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.570464 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.761381 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.783142 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.796164 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:53:14 crc kubenswrapper[5000]: E0105 21:53:14.796759 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47" containerName="nova-scheduler-scheduler" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.796782 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47" containerName="nova-scheduler-scheduler" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.797121 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47" containerName="nova-scheduler-scheduler" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 
21:53:14.797959 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.801366 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.805813 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.976558 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qdvt\" (UniqueName: \"kubernetes.io/projected/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-kube-api-access-9qdvt\") pod \"nova-scheduler-0\" (UID: \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\") " pod="openstack/nova-scheduler-0" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.976623 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\") " pod="openstack/nova-scheduler-0" Jan 05 21:53:14 crc kubenswrapper[5000]: I0105 21:53:14.976764 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-config-data\") pod \"nova-scheduler-0\" (UID: \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\") " pod="openstack/nova-scheduler-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.078482 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-config-data\") pod \"nova-scheduler-0\" (UID: \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\") " pod="openstack/nova-scheduler-0" Jan 05 21:53:15 crc 
kubenswrapper[5000]: I0105 21:53:15.078592 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qdvt\" (UniqueName: \"kubernetes.io/projected/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-kube-api-access-9qdvt\") pod \"nova-scheduler-0\" (UID: \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\") " pod="openstack/nova-scheduler-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.078614 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\") " pod="openstack/nova-scheduler-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.083718 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\") " pod="openstack/nova-scheduler-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.084410 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-config-data\") pod \"nova-scheduler-0\" (UID: \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\") " pod="openstack/nova-scheduler-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.096352 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qdvt\" (UniqueName: \"kubernetes.io/projected/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-kube-api-access-9qdvt\") pod \"nova-scheduler-0\" (UID: \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\") " pod="openstack/nova-scheduler-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.120445 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.263158 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.339464 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47" path="/var/lib/kubelet/pods/6c6f3b8f-72ba-42f4-bf4b-be7cb41c8a47/volumes" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.387730 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe496a-c80d-4c13-b084-38073098dbb3-config-data\") pod \"befe496a-c80d-4c13-b084-38073098dbb3\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.387847 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe496a-c80d-4c13-b084-38073098dbb3-combined-ca-bundle\") pod \"befe496a-c80d-4c13-b084-38073098dbb3\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.387969 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe496a-c80d-4c13-b084-38073098dbb3-logs\") pod \"befe496a-c80d-4c13-b084-38073098dbb3\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.387989 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2tkv\" (UniqueName: \"kubernetes.io/projected/befe496a-c80d-4c13-b084-38073098dbb3-kube-api-access-c2tkv\") pod \"befe496a-c80d-4c13-b084-38073098dbb3\" (UID: \"befe496a-c80d-4c13-b084-38073098dbb3\") " Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.388631 5000 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/befe496a-c80d-4c13-b084-38073098dbb3-logs" (OuterVolumeSpecName: "logs") pod "befe496a-c80d-4c13-b084-38073098dbb3" (UID: "befe496a-c80d-4c13-b084-38073098dbb3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.392139 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/befe496a-c80d-4c13-b084-38073098dbb3-kube-api-access-c2tkv" (OuterVolumeSpecName: "kube-api-access-c2tkv") pod "befe496a-c80d-4c13-b084-38073098dbb3" (UID: "befe496a-c80d-4c13-b084-38073098dbb3"). InnerVolumeSpecName "kube-api-access-c2tkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.412914 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befe496a-c80d-4c13-b084-38073098dbb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "befe496a-c80d-4c13-b084-38073098dbb3" (UID: "befe496a-c80d-4c13-b084-38073098dbb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.413247 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befe496a-c80d-4c13-b084-38073098dbb3-config-data" (OuterVolumeSpecName: "config-data") pod "befe496a-c80d-4c13-b084-38073098dbb3" (UID: "befe496a-c80d-4c13-b084-38073098dbb3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.427316 5000 generic.go:334] "Generic (PLEG): container finished" podID="befe496a-c80d-4c13-b084-38073098dbb3" containerID="e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14" exitCode=0 Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.427386 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.427385 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"befe496a-c80d-4c13-b084-38073098dbb3","Type":"ContainerDied","Data":"e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14"} Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.428083 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"befe496a-c80d-4c13-b084-38073098dbb3","Type":"ContainerDied","Data":"e0522d2b5e4818bd49c810b57fc5e976cb4954b01f5154ee42df0b6330eb8e34"} Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.428105 5000 scope.go:117] "RemoveContainer" containerID="e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.462661 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.466270 5000 scope.go:117] "RemoveContainer" containerID="b7e890898b40436ee2bfbc3c28bc8fd2d5fa881fb2dafdc9e5f8aaa5abf6f035" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.475545 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.485195 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:15 crc kubenswrapper[5000]: E0105 21:53:15.485754 5000 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="befe496a-c80d-4c13-b084-38073098dbb3" containerName="nova-api-log" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.485775 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="befe496a-c80d-4c13-b084-38073098dbb3" containerName="nova-api-log" Jan 05 21:53:15 crc kubenswrapper[5000]: E0105 21:53:15.485788 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befe496a-c80d-4c13-b084-38073098dbb3" containerName="nova-api-api" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.485796 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="befe496a-c80d-4c13-b084-38073098dbb3" containerName="nova-api-api" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.486015 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="befe496a-c80d-4c13-b084-38073098dbb3" containerName="nova-api-api" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.486044 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="befe496a-c80d-4c13-b084-38073098dbb3" containerName="nova-api-log" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.486136 5000 scope.go:117] "RemoveContainer" containerID="e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14" Jan 05 21:53:15 crc kubenswrapper[5000]: E0105 21:53:15.486595 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14\": container with ID starting with e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14 not found: ID does not exist" containerID="e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.486631 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14"} err="failed to get container status 
\"e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14\": rpc error: code = NotFound desc = could not find container \"e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14\": container with ID starting with e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14 not found: ID does not exist" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.486656 5000 scope.go:117] "RemoveContainer" containerID="b7e890898b40436ee2bfbc3c28bc8fd2d5fa881fb2dafdc9e5f8aaa5abf6f035" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.487055 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: E0105 21:53:15.487231 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7e890898b40436ee2bfbc3c28bc8fd2d5fa881fb2dafdc9e5f8aaa5abf6f035\": container with ID starting with b7e890898b40436ee2bfbc3c28bc8fd2d5fa881fb2dafdc9e5f8aaa5abf6f035 not found: ID does not exist" containerID="b7e890898b40436ee2bfbc3c28bc8fd2d5fa881fb2dafdc9e5f8aaa5abf6f035" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.487337 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7e890898b40436ee2bfbc3c28bc8fd2d5fa881fb2dafdc9e5f8aaa5abf6f035"} err="failed to get container status \"b7e890898b40436ee2bfbc3c28bc8fd2d5fa881fb2dafdc9e5f8aaa5abf6f035\": rpc error: code = NotFound desc = could not find container \"b7e890898b40436ee2bfbc3c28bc8fd2d5fa881fb2dafdc9e5f8aaa5abf6f035\": container with ID starting with b7e890898b40436ee2bfbc3c28bc8fd2d5fa881fb2dafdc9e5f8aaa5abf6f035 not found: ID does not exist" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.490955 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.491774 5000 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befe496a-c80d-4c13-b084-38073098dbb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.491806 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2tkv\" (UniqueName: \"kubernetes.io/projected/befe496a-c80d-4c13-b084-38073098dbb3-kube-api-access-c2tkv\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.491818 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befe496a-c80d-4c13-b084-38073098dbb3-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.491828 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befe496a-c80d-4c13-b084-38073098dbb3-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.493994 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.579231 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.593071 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c33ec666-d825-48a7-a50e-7968c287e884-config-data\") pod \"nova-api-0\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.593184 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pmcb\" (UniqueName: \"kubernetes.io/projected/c33ec666-d825-48a7-a50e-7968c287e884-kube-api-access-4pmcb\") pod \"nova-api-0\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " 
pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.593274 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33ec666-d825-48a7-a50e-7968c287e884-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.593317 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c33ec666-d825-48a7-a50e-7968c287e884-logs\") pod \"nova-api-0\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.695908 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pmcb\" (UniqueName: \"kubernetes.io/projected/c33ec666-d825-48a7-a50e-7968c287e884-kube-api-access-4pmcb\") pod \"nova-api-0\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.695998 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33ec666-d825-48a7-a50e-7968c287e884-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.696024 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c33ec666-d825-48a7-a50e-7968c287e884-logs\") pod \"nova-api-0\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.696081 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c33ec666-d825-48a7-a50e-7968c287e884-config-data\") pod \"nova-api-0\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.696642 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c33ec666-d825-48a7-a50e-7968c287e884-logs\") pod \"nova-api-0\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.700519 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33ec666-d825-48a7-a50e-7968c287e884-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.703124 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c33ec666-d825-48a7-a50e-7968c287e884-config-data\") pod \"nova-api-0\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.714650 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pmcb\" (UniqueName: \"kubernetes.io/projected/c33ec666-d825-48a7-a50e-7968c287e884-kube-api-access-4pmcb\") pod \"nova-api-0\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " pod="openstack/nova-api-0" Jan 05 21:53:15 crc kubenswrapper[5000]: I0105 21:53:15.809006 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:53:16 crc kubenswrapper[5000]: I0105 21:53:16.042982 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 21:53:16 crc kubenswrapper[5000]: I0105 21:53:16.043260 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 21:53:16 crc kubenswrapper[5000]: I0105 21:53:16.221984 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:16 crc kubenswrapper[5000]: I0105 21:53:16.446524 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c33ec666-d825-48a7-a50e-7968c287e884","Type":"ContainerStarted","Data":"2769b0b2ee4f14ecb1d0c79e4b5960de3eb2d5e5b54a6ad2b7f0d86cedb5e024"} Jan 05 21:53:16 crc kubenswrapper[5000]: I0105 21:53:16.446831 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c33ec666-d825-48a7-a50e-7968c287e884","Type":"ContainerStarted","Data":"ce74111706d96ecb8a96712159030e7898bfcc46a64b229d383046ad4f86ac00"} Jan 05 21:53:16 crc kubenswrapper[5000]: I0105 21:53:16.448419 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50","Type":"ContainerStarted","Data":"1db24269b28bd8ef07a980a254c936ec0e3e2710fac7cad30d7ad05615e364a4"} Jan 05 21:53:16 crc kubenswrapper[5000]: I0105 21:53:16.448460 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50","Type":"ContainerStarted","Data":"74701263492ffaac383a5d59871266eba243b1e775500fad304e714d556d1637"} Jan 05 21:53:16 crc kubenswrapper[5000]: I0105 21:53:16.476050 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.476026779 podStartE2EDuration="2.476026779s" podCreationTimestamp="2026-01-05 
21:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:16.462808763 +0000 UTC m=+1151.419011232" watchObservedRunningTime="2026-01-05 21:53:16.476026779 +0000 UTC m=+1151.432229248" Jan 05 21:53:16 crc kubenswrapper[5000]: I0105 21:53:16.680560 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 05 21:53:17 crc kubenswrapper[5000]: I0105 21:53:17.333987 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="befe496a-c80d-4c13-b084-38073098dbb3" path="/var/lib/kubelet/pods/befe496a-c80d-4c13-b084-38073098dbb3/volumes" Jan 05 21:53:17 crc kubenswrapper[5000]: I0105 21:53:17.464698 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c33ec666-d825-48a7-a50e-7968c287e884","Type":"ContainerStarted","Data":"39af0121e25cc677f07e8158f7a45e757703b51d198a255d7020a4deb0c977cc"} Jan 05 21:53:17 crc kubenswrapper[5000]: I0105 21:53:17.486769 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.486746401 podStartE2EDuration="2.486746401s" podCreationTimestamp="2026-01-05 21:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:17.484555449 +0000 UTC m=+1152.440757928" watchObservedRunningTime="2026-01-05 21:53:17.486746401 +0000 UTC m=+1152.442948870" Jan 05 21:53:19 crc kubenswrapper[5000]: I0105 21:53:19.796879 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 05 21:53:20 crc kubenswrapper[5000]: I0105 21:53:20.121290 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 05 21:53:20 crc kubenswrapper[5000]: I0105 21:53:20.193259 5000 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 21:53:20 crc kubenswrapper[5000]: I0105 21:53:20.193455 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0be433e4-7178-4637-922d-9d1d455b7f76" containerName="kube-state-metrics" containerID="cri-o://c251c03e8bdfa68bb70a2274c85103f634b64278f95d16df6e0672ffb6a217e5" gracePeriod=30 Jan 05 21:53:20 crc kubenswrapper[5000]: I0105 21:53:20.497987 5000 generic.go:334] "Generic (PLEG): container finished" podID="0be433e4-7178-4637-922d-9d1d455b7f76" containerID="c251c03e8bdfa68bb70a2274c85103f634b64278f95d16df6e0672ffb6a217e5" exitCode=2 Jan 05 21:53:20 crc kubenswrapper[5000]: I0105 21:53:20.498166 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0be433e4-7178-4637-922d-9d1d455b7f76","Type":"ContainerDied","Data":"c251c03e8bdfa68bb70a2274c85103f634b64278f95d16df6e0672ffb6a217e5"} Jan 05 21:53:20 crc kubenswrapper[5000]: I0105 21:53:20.684609 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 21:53:20 crc kubenswrapper[5000]: I0105 21:53:20.789651 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzkng\" (UniqueName: \"kubernetes.io/projected/0be433e4-7178-4637-922d-9d1d455b7f76-kube-api-access-zzkng\") pod \"0be433e4-7178-4637-922d-9d1d455b7f76\" (UID: \"0be433e4-7178-4637-922d-9d1d455b7f76\") " Jan 05 21:53:20 crc kubenswrapper[5000]: I0105 21:53:20.797234 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be433e4-7178-4637-922d-9d1d455b7f76-kube-api-access-zzkng" (OuterVolumeSpecName: "kube-api-access-zzkng") pod "0be433e4-7178-4637-922d-9d1d455b7f76" (UID: "0be433e4-7178-4637-922d-9d1d455b7f76"). InnerVolumeSpecName "kube-api-access-zzkng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:20 crc kubenswrapper[5000]: I0105 21:53:20.892408 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzkng\" (UniqueName: \"kubernetes.io/projected/0be433e4-7178-4637-922d-9d1d455b7f76-kube-api-access-zzkng\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.042918 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.043005 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.511345 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0be433e4-7178-4637-922d-9d1d455b7f76","Type":"ContainerDied","Data":"bb97be9377b2ab4490ad0a2a25e75e4828c4ca854acc66aee4701f67ffd2af01"} Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.511663 5000 scope.go:117] "RemoveContainer" containerID="c251c03e8bdfa68bb70a2274c85103f634b64278f95d16df6e0672ffb6a217e5" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.511816 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.535944 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.548251 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.560412 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 21:53:21 crc kubenswrapper[5000]: E0105 21:53:21.560803 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be433e4-7178-4637-922d-9d1d455b7f76" containerName="kube-state-metrics" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.560818 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be433e4-7178-4637-922d-9d1d455b7f76" containerName="kube-state-metrics" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.561020 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be433e4-7178-4637-922d-9d1d455b7f76" containerName="kube-state-metrics" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.561608 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.564253 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.570708 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.579021 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.711352 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1cb8a9e8-897c-4005-9ba7-555eeba1b6c1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1\") " pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.711399 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb8a9e8-897c-4005-9ba7-555eeba1b6c1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1\") " pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.711439 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnq7c\" (UniqueName: \"kubernetes.io/projected/1cb8a9e8-897c-4005-9ba7-555eeba1b6c1-kube-api-access-fnq7c\") pod \"kube-state-metrics-0\" (UID: \"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1\") " pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.711493 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1cb8a9e8-897c-4005-9ba7-555eeba1b6c1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1\") " pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.729608 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.729945 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="ceilometer-central-agent" containerID="cri-o://197a1560a5e017a7f742d09093279dc14501c21cb7f45b07827286bb39bd06af" gracePeriod=30 Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.730084 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="ceilometer-notification-agent" containerID="cri-o://c00f510b237b67d04eacb2d7f0415530e083a5e8dc90bbb0a7a54669ac9e9835" gracePeriod=30 Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.730109 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="sg-core" containerID="cri-o://737409486df281fbf426801f1908806e96a2a2215e6479e4a88554f578cf3d85" gracePeriod=30 Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.730109 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="proxy-httpd" containerID="cri-o://7a6a4968715d9d44c7b8c778b6e54d185b03b1a688a862e746c6bb4413986aae" gracePeriod=30 Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.812930 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/1cb8a9e8-897c-4005-9ba7-555eeba1b6c1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1\") " pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.812982 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb8a9e8-897c-4005-9ba7-555eeba1b6c1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1\") " pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.813019 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnq7c\" (UniqueName: \"kubernetes.io/projected/1cb8a9e8-897c-4005-9ba7-555eeba1b6c1-kube-api-access-fnq7c\") pod \"kube-state-metrics-0\" (UID: \"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1\") " pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.813071 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb8a9e8-897c-4005-9ba7-555eeba1b6c1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1\") " pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.818863 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1cb8a9e8-897c-4005-9ba7-555eeba1b6c1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1\") " pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.820064 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1cb8a9e8-897c-4005-9ba7-555eeba1b6c1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1\") " pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.826617 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb8a9e8-897c-4005-9ba7-555eeba1b6c1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1\") " pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.836332 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnq7c\" (UniqueName: \"kubernetes.io/projected/1cb8a9e8-897c-4005-9ba7-555eeba1b6c1-kube-api-access-fnq7c\") pod \"kube-state-metrics-0\" (UID: \"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1\") " pod="openstack/kube-state-metrics-0" Jan 05 21:53:21 crc kubenswrapper[5000]: I0105 21:53:21.894115 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 05 21:53:22 crc kubenswrapper[5000]: I0105 21:53:22.061305 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4a525a58-3825-42e1-a174-cf6efd751b30" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:53:22 crc kubenswrapper[5000]: I0105 21:53:22.061751 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4a525a58-3825-42e1-a174-cf6efd751b30" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:53:22 crc kubenswrapper[5000]: I0105 21:53:22.427449 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 05 21:53:22 crc kubenswrapper[5000]: I0105 21:53:22.546201 5000 generic.go:334] "Generic (PLEG): container finished" podID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerID="7a6a4968715d9d44c7b8c778b6e54d185b03b1a688a862e746c6bb4413986aae" exitCode=0 Jan 05 21:53:22 crc kubenswrapper[5000]: I0105 21:53:22.546243 5000 generic.go:334] "Generic (PLEG): container finished" podID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerID="737409486df281fbf426801f1908806e96a2a2215e6479e4a88554f578cf3d85" exitCode=2 Jan 05 21:53:22 crc kubenswrapper[5000]: I0105 21:53:22.546254 5000 generic.go:334] "Generic (PLEG): container finished" podID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerID="197a1560a5e017a7f742d09093279dc14501c21cb7f45b07827286bb39bd06af" exitCode=0 Jan 05 21:53:22 crc kubenswrapper[5000]: I0105 21:53:22.546333 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d308eadf-cd5d-4a84-863b-dc64302ebfda","Type":"ContainerDied","Data":"7a6a4968715d9d44c7b8c778b6e54d185b03b1a688a862e746c6bb4413986aae"} Jan 05 21:53:22 crc kubenswrapper[5000]: I0105 21:53:22.546369 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d308eadf-cd5d-4a84-863b-dc64302ebfda","Type":"ContainerDied","Data":"737409486df281fbf426801f1908806e96a2a2215e6479e4a88554f578cf3d85"} Jan 05 21:53:22 crc kubenswrapper[5000]: I0105 21:53:22.546384 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d308eadf-cd5d-4a84-863b-dc64302ebfda","Type":"ContainerDied","Data":"197a1560a5e017a7f742d09093279dc14501c21cb7f45b07827286bb39bd06af"} Jan 05 21:53:22 crc kubenswrapper[5000]: I0105 21:53:22.554937 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1","Type":"ContainerStarted","Data":"639a86d3dcb8500c0795af8ee105d737c8804a89f85d00f220f6586d8c24c331"} Jan 05 21:53:23 crc kubenswrapper[5000]: I0105 21:53:23.098746 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:53:23 crc kubenswrapper[5000]: I0105 21:53:23.099134 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:53:23 crc kubenswrapper[5000]: I0105 21:53:23.335032 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be433e4-7178-4637-922d-9d1d455b7f76" 
path="/var/lib/kubelet/pods/0be433e4-7178-4637-922d-9d1d455b7f76/volumes" Jan 05 21:53:23 crc kubenswrapper[5000]: I0105 21:53:23.564148 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1cb8a9e8-897c-4005-9ba7-555eeba1b6c1","Type":"ContainerStarted","Data":"eb7823b917601c09d7fc2103bebc145d9cb3694e3e593170c53d2db9f09a3b20"} Jan 05 21:53:23 crc kubenswrapper[5000]: I0105 21:53:23.564493 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.576595 5000 generic.go:334] "Generic (PLEG): container finished" podID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerID="c00f510b237b67d04eacb2d7f0415530e083a5e8dc90bbb0a7a54669ac9e9835" exitCode=0 Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.576692 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d308eadf-cd5d-4a84-863b-dc64302ebfda","Type":"ContainerDied","Data":"c00f510b237b67d04eacb2d7f0415530e083a5e8dc90bbb0a7a54669ac9e9835"} Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.577069 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d308eadf-cd5d-4a84-863b-dc64302ebfda","Type":"ContainerDied","Data":"a7f58f0535c675acf296b50b34f03729e9a4eff161d68028c98b2deebc5697cb"} Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.577089 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7f58f0535c675acf296b50b34f03729e9a4eff161d68028c98b2deebc5697cb" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.601687 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.621265 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.25933822 podStartE2EDuration="3.621244843s" podCreationTimestamp="2026-01-05 21:53:21 +0000 UTC" firstStartedPulling="2026-01-05 21:53:22.430739361 +0000 UTC m=+1157.386941820" lastFinishedPulling="2026-01-05 21:53:22.792645974 +0000 UTC m=+1157.748848443" observedRunningTime="2026-01-05 21:53:23.59399876 +0000 UTC m=+1158.550201229" watchObservedRunningTime="2026-01-05 21:53:24.621244843 +0000 UTC m=+1159.577447312" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.674418 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-config-data\") pod \"d308eadf-cd5d-4a84-863b-dc64302ebfda\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.674510 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-scripts\") pod \"d308eadf-cd5d-4a84-863b-dc64302ebfda\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.674553 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-combined-ca-bundle\") pod \"d308eadf-cd5d-4a84-863b-dc64302ebfda\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.674650 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-sg-core-conf-yaml\") pod 
\"d308eadf-cd5d-4a84-863b-dc64302ebfda\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.674672 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d308eadf-cd5d-4a84-863b-dc64302ebfda-run-httpd\") pod \"d308eadf-cd5d-4a84-863b-dc64302ebfda\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.674725 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-755qx\" (UniqueName: \"kubernetes.io/projected/d308eadf-cd5d-4a84-863b-dc64302ebfda-kube-api-access-755qx\") pod \"d308eadf-cd5d-4a84-863b-dc64302ebfda\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.674770 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d308eadf-cd5d-4a84-863b-dc64302ebfda-log-httpd\") pod \"d308eadf-cd5d-4a84-863b-dc64302ebfda\" (UID: \"d308eadf-cd5d-4a84-863b-dc64302ebfda\") " Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.675402 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d308eadf-cd5d-4a84-863b-dc64302ebfda-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d308eadf-cd5d-4a84-863b-dc64302ebfda" (UID: "d308eadf-cd5d-4a84-863b-dc64302ebfda"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.675953 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d308eadf-cd5d-4a84-863b-dc64302ebfda-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d308eadf-cd5d-4a84-863b-dc64302ebfda" (UID: "d308eadf-cd5d-4a84-863b-dc64302ebfda"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.680710 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d308eadf-cd5d-4a84-863b-dc64302ebfda-kube-api-access-755qx" (OuterVolumeSpecName: "kube-api-access-755qx") pod "d308eadf-cd5d-4a84-863b-dc64302ebfda" (UID: "d308eadf-cd5d-4a84-863b-dc64302ebfda"). InnerVolumeSpecName "kube-api-access-755qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.684021 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-scripts" (OuterVolumeSpecName: "scripts") pod "d308eadf-cd5d-4a84-863b-dc64302ebfda" (UID: "d308eadf-cd5d-4a84-863b-dc64302ebfda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.717847 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d308eadf-cd5d-4a84-863b-dc64302ebfda" (UID: "d308eadf-cd5d-4a84-863b-dc64302ebfda"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.763648 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d308eadf-cd5d-4a84-863b-dc64302ebfda" (UID: "d308eadf-cd5d-4a84-863b-dc64302ebfda"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.777541 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.777582 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.777596 5000 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d308eadf-cd5d-4a84-863b-dc64302ebfda-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.777607 5000 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.777618 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-755qx\" (UniqueName: \"kubernetes.io/projected/d308eadf-cd5d-4a84-863b-dc64302ebfda-kube-api-access-755qx\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.777629 5000 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d308eadf-cd5d-4a84-863b-dc64302ebfda-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.784724 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-config-data" (OuterVolumeSpecName: "config-data") pod "d308eadf-cd5d-4a84-863b-dc64302ebfda" (UID: "d308eadf-cd5d-4a84-863b-dc64302ebfda"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:24 crc kubenswrapper[5000]: I0105 21:53:24.879725 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d308eadf-cd5d-4a84-863b-dc64302ebfda-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.121651 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.168652 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.585269 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.615075 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.624777 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.627845 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.636551 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:25 crc kubenswrapper[5000]: E0105 21:53:25.661091 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="ceilometer-central-agent" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.661141 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="ceilometer-central-agent" Jan 05 21:53:25 crc kubenswrapper[5000]: E0105 21:53:25.661175 5000 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="sg-core" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.661181 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="sg-core" Jan 05 21:53:25 crc kubenswrapper[5000]: E0105 21:53:25.661193 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="proxy-httpd" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.661199 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="proxy-httpd" Jan 05 21:53:25 crc kubenswrapper[5000]: E0105 21:53:25.661214 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="ceilometer-notification-agent" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.661220 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="ceilometer-notification-agent" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.661497 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="ceilometer-central-agent" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.661514 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="proxy-httpd" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.661528 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="ceilometer-notification-agent" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.661540 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" containerName="sg-core" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.663036 5000 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.663192 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.665746 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.677269 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.677506 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.803066 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00ff61ed-5d70-4346-9df0-18cd69b0c11a-log-httpd\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.803134 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-config-data\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.803262 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.803308 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.803433 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00ff61ed-5d70-4346-9df0-18cd69b0c11a-run-httpd\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.803458 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4676f\" (UniqueName: \"kubernetes.io/projected/00ff61ed-5d70-4346-9df0-18cd69b0c11a-kube-api-access-4676f\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.803523 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.803688 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-scripts\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.809443 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.809492 5000 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.905821 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00ff61ed-5d70-4346-9df0-18cd69b0c11a-log-httpd\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.906180 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-config-data\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.906288 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.906367 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.906498 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00ff61ed-5d70-4346-9df0-18cd69b0c11a-run-httpd\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.906574 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4676f\" (UniqueName: \"kubernetes.io/projected/00ff61ed-5d70-4346-9df0-18cd69b0c11a-kube-api-access-4676f\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.906284 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00ff61ed-5d70-4346-9df0-18cd69b0c11a-log-httpd\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.906865 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00ff61ed-5d70-4346-9df0-18cd69b0c11a-run-httpd\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.906866 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.907071 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-scripts\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.911199 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc 
kubenswrapper[5000]: I0105 21:53:25.911852 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.912277 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-config-data\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.913671 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.933860 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4676f\" (UniqueName: \"kubernetes.io/projected/00ff61ed-5d70-4346-9df0-18cd69b0c11a-kube-api-access-4676f\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:25 crc kubenswrapper[5000]: I0105 21:53:25.934010 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-scripts\") pod \"ceilometer-0\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " pod="openstack/ceilometer-0" Jan 05 21:53:26 crc kubenswrapper[5000]: I0105 21:53:26.010925 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:53:26 crc kubenswrapper[5000]: I0105 21:53:26.484253 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:26 crc kubenswrapper[5000]: I0105 21:53:26.607532 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00ff61ed-5d70-4346-9df0-18cd69b0c11a","Type":"ContainerStarted","Data":"eef8c11ef5aa8cd671cb6f9b1fb1e64a1dcc365abc109a689ffad4afcee875cd"} Jan 05 21:53:26 crc kubenswrapper[5000]: I0105 21:53:26.893178 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c33ec666-d825-48a7-a50e-7968c287e884" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:53:26 crc kubenswrapper[5000]: I0105 21:53:26.893447 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c33ec666-d825-48a7-a50e-7968c287e884" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 05 21:53:27 crc kubenswrapper[5000]: I0105 21:53:27.337498 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d308eadf-cd5d-4a84-863b-dc64302ebfda" path="/var/lib/kubelet/pods/d308eadf-cd5d-4a84-863b-dc64302ebfda/volumes" Jan 05 21:53:27 crc kubenswrapper[5000]: I0105 21:53:27.616301 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00ff61ed-5d70-4346-9df0-18cd69b0c11a","Type":"ContainerStarted","Data":"fbd79111645e72149af918e450b088a2d350984336342f38a5437fa054bb8036"} Jan 05 21:53:28 crc kubenswrapper[5000]: I0105 21:53:28.630625 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"00ff61ed-5d70-4346-9df0-18cd69b0c11a","Type":"ContainerStarted","Data":"cab1b41094ba2ae7964c65dc577180863e79c4edce54f073da42a6170488b9c3"} Jan 05 21:53:29 crc kubenswrapper[5000]: I0105 21:53:29.642612 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00ff61ed-5d70-4346-9df0-18cd69b0c11a","Type":"ContainerStarted","Data":"071456611bf6a61986aa7f00128fa6534cafeb0a04496f42defa8204b63e6f24"} Jan 05 21:53:30 crc kubenswrapper[5000]: I0105 21:53:30.658123 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00ff61ed-5d70-4346-9df0-18cd69b0c11a","Type":"ContainerStarted","Data":"969de1746c896fe89540f9bf689231cc743fecf1b210c9bb4476d512f96e38c5"} Jan 05 21:53:30 crc kubenswrapper[5000]: I0105 21:53:30.658475 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 21:53:30 crc kubenswrapper[5000]: I0105 21:53:30.678583 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.615075336 podStartE2EDuration="5.678568105s" podCreationTimestamp="2026-01-05 21:53:25 +0000 UTC" firstStartedPulling="2026-01-05 21:53:26.474688461 +0000 UTC m=+1161.430890940" lastFinishedPulling="2026-01-05 21:53:29.53818124 +0000 UTC m=+1164.494383709" observedRunningTime="2026-01-05 21:53:30.674989783 +0000 UTC m=+1165.631192262" watchObservedRunningTime="2026-01-05 21:53:30.678568105 +0000 UTC m=+1165.634770574" Jan 05 21:53:31 crc kubenswrapper[5000]: I0105 21:53:31.052856 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 05 21:53:31 crc kubenswrapper[5000]: I0105 21:53:31.053703 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 05 21:53:31 crc kubenswrapper[5000]: I0105 21:53:31.058663 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Jan 05 21:53:31 crc kubenswrapper[5000]: I0105 21:53:31.670299 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 05 21:53:31 crc kubenswrapper[5000]: I0105 21:53:31.904272 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 05 21:53:33 crc kubenswrapper[5000]: W0105 21:53:33.228409 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a176fd9_427f_4b3a_a87e_4bbc1f4465f6.slice/crio-cea8a8c841024f4bc68c5ee9bde0c774d449fd0b9fcac05d41666d60700c1023 WatchSource:0}: Error finding container cea8a8c841024f4bc68c5ee9bde0c774d449fd0b9fcac05d41666d60700c1023: Status 404 returned error can't find the container with id cea8a8c841024f4bc68c5ee9bde0c774d449fd0b9fcac05d41666d60700c1023 Jan 05 21:53:33 crc kubenswrapper[5000]: W0105 21:53:33.233511 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a176fd9_427f_4b3a_a87e_4bbc1f4465f6.slice/crio-38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d.scope WatchSource:0}: Error finding container 38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d: Status 404 returned error can't find the container with id 38db8477bba83a66d09ef3dc7ccd2eec9621aa7fc93e2a43c35edb075f7cc70d Jan 05 21:53:33 crc kubenswrapper[5000]: W0105 21:53:33.233955 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a176fd9_427f_4b3a_a87e_4bbc1f4465f6.slice/crio-40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9.scope WatchSource:0}: Error finding container 40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9: Status 404 returned error can't find the container with id 
40708b479319d174e32b2c5825648188b7bf36ffdb41c4d471ba3ee32aa735e9 Jan 05 21:53:33 crc kubenswrapper[5000]: E0105 21:53:33.461098 5000 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbefe496a_c80d_4c13_b084_38073098dbb3.slice/crio-conmon-e89ed749d4174fce05f5df80055c5cfe9516ca4ef06f59afa6008af16c9eee14.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd308eadf_cd5d_4a84_863b_dc64302ebfda.slice/crio-7a6a4968715d9d44c7b8c778b6e54d185b03b1a688a862e746c6bb4413986aae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd308eadf_cd5d_4a84_863b_dc64302ebfda.slice/crio-197a1560a5e017a7f742d09093279dc14501c21cb7f45b07827286bb39bd06af.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0be433e4_7178_4637_922d_9d1d455b7f76.slice/crio-bb97be9377b2ab4490ad0a2a25e75e4828c4ca854acc66aee4701f67ffd2af01\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd308eadf_cd5d_4a84_863b_dc64302ebfda.slice/crio-a7f58f0535c675acf296b50b34f03729e9a4eff161d68028c98b2deebc5697cb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa4a24a0_0380_498f_87b9_3e3b2e0915d5.slice/crio-c4463eaa0729a4262c96e0a0da6cef3ed02c0e2681518cef57053aa02fdbcce7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd308eadf_cd5d_4a84_863b_dc64302ebfda.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa4a24a0_0380_498f_87b9_3e3b2e0915d5.slice/crio-conmon-c4463eaa0729a4262c96e0a0da6cef3ed02c0e2681518cef57053aa02fdbcce7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0be433e4_7178_4637_922d_9d1d455b7f76.slice/crio-conmon-c251c03e8bdfa68bb70a2274c85103f634b64278f95d16df6e0672ffb6a217e5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0be433e4_7178_4637_922d_9d1d455b7f76.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd308eadf_cd5d_4a84_863b_dc64302ebfda.slice/crio-conmon-c00f510b237b67d04eacb2d7f0415530e083a5e8dc90bbb0a7a54669ac9e9835.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbefe496a_c80d_4c13_b084_38073098dbb3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd308eadf_cd5d_4a84_863b_dc64302ebfda.slice/crio-conmon-737409486df281fbf426801f1908806e96a2a2215e6479e4a88554f578cf3d85.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd308eadf_cd5d_4a84_863b_dc64302ebfda.slice/crio-conmon-197a1560a5e017a7f742d09093279dc14501c21cb7f45b07827286bb39bd06af.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd308eadf_cd5d_4a84_863b_dc64302ebfda.slice/crio-conmon-7a6a4968715d9d44c7b8c778b6e54d185b03b1a688a862e746c6bb4413986aae.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd308eadf_cd5d_4a84_863b_dc64302ebfda.slice/crio-737409486df281fbf426801f1908806e96a2a2215e6479e4a88554f578cf3d85.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd308eadf_cd5d_4a84_863b_dc64302ebfda.slice/crio-c00f510b237b67d04eacb2d7f0415530e083a5e8dc90bbb0a7a54669ac9e9835.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbefe496a_c80d_4c13_b084_38073098dbb3.slice/crio-e0522d2b5e4818bd49c810b57fc5e976cb4954b01f5154ee42df0b6330eb8e34\": RecentStats: unable to find data in memory cache]" Jan 05 21:53:33 crc kubenswrapper[5000]: I0105 21:53:33.684938 5000 generic.go:334] "Generic (PLEG): container finished" podID="fa4a24a0-0380-498f-87b9-3e3b2e0915d5" containerID="c4463eaa0729a4262c96e0a0da6cef3ed02c0e2681518cef57053aa02fdbcce7" exitCode=137 Jan 05 21:53:33 crc kubenswrapper[5000]: I0105 21:53:33.685003 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa4a24a0-0380-498f-87b9-3e3b2e0915d5","Type":"ContainerDied","Data":"c4463eaa0729a4262c96e0a0da6cef3ed02c0e2681518cef57053aa02fdbcce7"} Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.071637 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.189798 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-combined-ca-bundle\") pod \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\" (UID: \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\") " Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.189936 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvpq5\" (UniqueName: \"kubernetes.io/projected/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-kube-api-access-kvpq5\") pod \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\" (UID: \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\") " Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.190783 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-config-data\") pod \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\" (UID: \"fa4a24a0-0380-498f-87b9-3e3b2e0915d5\") " Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.202092 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-kube-api-access-kvpq5" (OuterVolumeSpecName: "kube-api-access-kvpq5") pod "fa4a24a0-0380-498f-87b9-3e3b2e0915d5" (UID: "fa4a24a0-0380-498f-87b9-3e3b2e0915d5"). InnerVolumeSpecName "kube-api-access-kvpq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.221137 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-config-data" (OuterVolumeSpecName: "config-data") pod "fa4a24a0-0380-498f-87b9-3e3b2e0915d5" (UID: "fa4a24a0-0380-498f-87b9-3e3b2e0915d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.224808 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa4a24a0-0380-498f-87b9-3e3b2e0915d5" (UID: "fa4a24a0-0380-498f-87b9-3e3b2e0915d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.293684 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.293727 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.293742 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvpq5\" (UniqueName: \"kubernetes.io/projected/fa4a24a0-0380-498f-87b9-3e3b2e0915d5-kube-api-access-kvpq5\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.698133 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa4a24a0-0380-498f-87b9-3e3b2e0915d5","Type":"ContainerDied","Data":"6e3448c1e7bfcfc7ae79c3f4b9f6aca9ca4e4385b5accdf8aa2b5f4bf6bb0302"} Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.698201 5000 scope.go:117] "RemoveContainer" containerID="c4463eaa0729a4262c96e0a0da6cef3ed02c0e2681518cef57053aa02fdbcce7" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.698497 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.748566 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.763154 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.836127 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 21:53:34 crc kubenswrapper[5000]: E0105 21:53:34.836786 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa4a24a0-0380-498f-87b9-3e3b2e0915d5" containerName="nova-cell1-novncproxy-novncproxy" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.836927 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa4a24a0-0380-498f-87b9-3e3b2e0915d5" containerName="nova-cell1-novncproxy-novncproxy" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.837184 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa4a24a0-0380-498f-87b9-3e3b2e0915d5" containerName="nova-cell1-novncproxy-novncproxy" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.837998 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.840837 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.841004 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.841119 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.845195 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.935189 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa822db9-b962-42dd-a6c8-3774d9c6d477-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.935231 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa822db9-b962-42dd-a6c8-3774d9c6d477-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.935322 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa822db9-b962-42dd-a6c8-3774d9c6d477-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:34 crc kubenswrapper[5000]: 
I0105 21:53:34.935369 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa822db9-b962-42dd-a6c8-3774d9c6d477-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:34 crc kubenswrapper[5000]: I0105 21:53:34.935492 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhbbq\" (UniqueName: \"kubernetes.io/projected/aa822db9-b962-42dd-a6c8-3774d9c6d477-kube-api-access-mhbbq\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.036801 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhbbq\" (UniqueName: \"kubernetes.io/projected/aa822db9-b962-42dd-a6c8-3774d9c6d477-kube-api-access-mhbbq\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.036941 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa822db9-b962-42dd-a6c8-3774d9c6d477-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.036967 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa822db9-b962-42dd-a6c8-3774d9c6d477-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.037006 
5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa822db9-b962-42dd-a6c8-3774d9c6d477-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.037033 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa822db9-b962-42dd-a6c8-3774d9c6d477-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.043530 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa822db9-b962-42dd-a6c8-3774d9c6d477-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.045002 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa822db9-b962-42dd-a6c8-3774d9c6d477-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.046635 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa822db9-b962-42dd-a6c8-3774d9c6d477-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.054815 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/aa822db9-b962-42dd-a6c8-3774d9c6d477-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.056390 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhbbq\" (UniqueName: \"kubernetes.io/projected/aa822db9-b962-42dd-a6c8-3774d9c6d477-kube-api-access-mhbbq\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa822db9-b962-42dd-a6c8-3774d9c6d477\") " pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.153770 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.334770 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa4a24a0-0380-498f-87b9-3e3b2e0915d5" path="/var/lib/kubelet/pods/fa4a24a0-0380-498f-87b9-3e3b2e0915d5/volumes" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.596318 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 05 21:53:35 crc kubenswrapper[5000]: W0105 21:53:35.596481 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa822db9_b962_42dd_a6c8_3774d9c6d477.slice/crio-7506258da09213cfb5bedadfaa4e988c1026b367ab789b71c200deb98d51669c WatchSource:0}: Error finding container 7506258da09213cfb5bedadfaa4e988c1026b367ab789b71c200deb98d51669c: Status 404 returned error can't find the container with id 7506258da09213cfb5bedadfaa4e988c1026b367ab789b71c200deb98d51669c Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.712927 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"aa822db9-b962-42dd-a6c8-3774d9c6d477","Type":"ContainerStarted","Data":"7506258da09213cfb5bedadfaa4e988c1026b367ab789b71c200deb98d51669c"} Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.814412 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.814947 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.816046 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 21:53:35 crc kubenswrapper[5000]: I0105 21:53:35.818161 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 21:53:36 crc kubenswrapper[5000]: I0105 21:53:36.727497 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aa822db9-b962-42dd-a6c8-3774d9c6d477","Type":"ContainerStarted","Data":"7d9dd823b0cef0db4ca9b94c00aa6f2848f71c0aa6dd3ed74737079562acdb7a"} Jan 05 21:53:36 crc kubenswrapper[5000]: I0105 21:53:36.727942 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 21:53:36 crc kubenswrapper[5000]: I0105 21:53:36.732398 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 21:53:36 crc kubenswrapper[5000]: I0105 21:53:36.751243 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.751228862 podStartE2EDuration="2.751228862s" podCreationTimestamp="2026-01-05 21:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:36.745369715 +0000 UTC m=+1171.701572204" watchObservedRunningTime="2026-01-05 21:53:36.751228862 +0000 UTC 
m=+1171.707431331" Jan 05 21:53:36 crc kubenswrapper[5000]: I0105 21:53:36.912573 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7csvm"] Jan 05 21:53:36 crc kubenswrapper[5000]: I0105 21:53:36.914264 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:36 crc kubenswrapper[5000]: I0105 21:53:36.950998 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7csvm"] Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.078513 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2zg5\" (UniqueName: \"kubernetes.io/projected/eb9f5c4b-b0d7-42d7-bf63-06701667697b-kube-api-access-p2zg5\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.078607 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.078668 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-config\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.078744 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.078761 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.078797 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.181092 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-config\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.181188 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.181211 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.181246 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.181269 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2zg5\" (UniqueName: \"kubernetes.io/projected/eb9f5c4b-b0d7-42d7-bf63-06701667697b-kube-api-access-p2zg5\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.181312 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.182096 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-config\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.182254 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-dns-svc\") pod 
\"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.182277 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.182550 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.191389 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.200611 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2zg5\" (UniqueName: \"kubernetes.io/projected/eb9f5c4b-b0d7-42d7-bf63-06701667697b-kube-api-access-p2zg5\") pod \"dnsmasq-dns-89c5cd4d5-7csvm\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.287727 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:37 crc kubenswrapper[5000]: I0105 21:53:37.781311 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7csvm"] Jan 05 21:53:38 crc kubenswrapper[5000]: I0105 21:53:38.744626 5000 generic.go:334] "Generic (PLEG): container finished" podID="eb9f5c4b-b0d7-42d7-bf63-06701667697b" containerID="5b3fe0a91ef6f525bd44596e83016e7802c0c23c5d632e8640ce24e41ca34b46" exitCode=0 Jan 05 21:53:38 crc kubenswrapper[5000]: I0105 21:53:38.744719 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" event={"ID":"eb9f5c4b-b0d7-42d7-bf63-06701667697b","Type":"ContainerDied","Data":"5b3fe0a91ef6f525bd44596e83016e7802c0c23c5d632e8640ce24e41ca34b46"} Jan 05 21:53:38 crc kubenswrapper[5000]: I0105 21:53:38.745216 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" event={"ID":"eb9f5c4b-b0d7-42d7-bf63-06701667697b","Type":"ContainerStarted","Data":"a25894c4806e6e629fb46f41bddea0a476db29e79003cf46b1701710c7ab410c"} Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.016412 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.016759 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="ceilometer-central-agent" containerID="cri-o://fbd79111645e72149af918e450b088a2d350984336342f38a5437fa054bb8036" gracePeriod=30 Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.016926 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="proxy-httpd" containerID="cri-o://969de1746c896fe89540f9bf689231cc743fecf1b210c9bb4476d512f96e38c5" gracePeriod=30 Jan 05 21:53:39 crc 
kubenswrapper[5000]: I0105 21:53:39.016979 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="sg-core" containerID="cri-o://071456611bf6a61986aa7f00128fa6534cafeb0a04496f42defa8204b63e6f24" gracePeriod=30 Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.017018 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="ceilometer-notification-agent" containerID="cri-o://cab1b41094ba2ae7964c65dc577180863e79c4edce54f073da42a6170488b9c3" gracePeriod=30 Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.023073 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.200:3000/\": EOF" Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.158705 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.759674 5000 generic.go:334] "Generic (PLEG): container finished" podID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerID="969de1746c896fe89540f9bf689231cc743fecf1b210c9bb4476d512f96e38c5" exitCode=0 Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.759705 5000 generic.go:334] "Generic (PLEG): container finished" podID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerID="071456611bf6a61986aa7f00128fa6534cafeb0a04496f42defa8204b63e6f24" exitCode=2 Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.759714 5000 generic.go:334] "Generic (PLEG): container finished" podID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerID="fbd79111645e72149af918e450b088a2d350984336342f38a5437fa054bb8036" exitCode=0 Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.759747 5000 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"00ff61ed-5d70-4346-9df0-18cd69b0c11a","Type":"ContainerDied","Data":"969de1746c896fe89540f9bf689231cc743fecf1b210c9bb4476d512f96e38c5"} Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.759789 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00ff61ed-5d70-4346-9df0-18cd69b0c11a","Type":"ContainerDied","Data":"071456611bf6a61986aa7f00128fa6534cafeb0a04496f42defa8204b63e6f24"} Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.759802 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00ff61ed-5d70-4346-9df0-18cd69b0c11a","Type":"ContainerDied","Data":"fbd79111645e72149af918e450b088a2d350984336342f38a5437fa054bb8036"} Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.768941 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c33ec666-d825-48a7-a50e-7968c287e884" containerName="nova-api-log" containerID="cri-o://2769b0b2ee4f14ecb1d0c79e4b5960de3eb2d5e5b54a6ad2b7f0d86cedb5e024" gracePeriod=30 Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.770277 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" event={"ID":"eb9f5c4b-b0d7-42d7-bf63-06701667697b","Type":"ContainerStarted","Data":"6d70b0ecdae014a4d3cf328837dfbf9c9bb1d60b9d4e22adafea85d8a6a0be34"} Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.770603 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c33ec666-d825-48a7-a50e-7968c287e884" containerName="nova-api-api" containerID="cri-o://39af0121e25cc677f07e8158f7a45e757703b51d198a255d7020a4deb0c977cc" gracePeriod=30 Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 21:53:39.770685 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:39 crc kubenswrapper[5000]: I0105 
21:53:39.796859 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" podStartSLOduration=3.796843415 podStartE2EDuration="3.796843415s" podCreationTimestamp="2026-01-05 21:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:39.796391872 +0000 UTC m=+1174.752594351" watchObservedRunningTime="2026-01-05 21:53:39.796843415 +0000 UTC m=+1174.753045884" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.154021 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.199693 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.268057 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-ceilometer-tls-certs\") pod \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.271020 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-config-data\") pod \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.271066 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00ff61ed-5d70-4346-9df0-18cd69b0c11a-run-httpd\") pod \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.271152 5000 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-sg-core-conf-yaml\") pod \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.271271 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-combined-ca-bundle\") pod \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.271324 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4676f\" (UniqueName: \"kubernetes.io/projected/00ff61ed-5d70-4346-9df0-18cd69b0c11a-kube-api-access-4676f\") pod \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.271450 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00ff61ed-5d70-4346-9df0-18cd69b0c11a-log-httpd\") pod \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.271533 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-scripts\") pod \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\" (UID: \"00ff61ed-5d70-4346-9df0-18cd69b0c11a\") " Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.271968 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00ff61ed-5d70-4346-9df0-18cd69b0c11a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "00ff61ed-5d70-4346-9df0-18cd69b0c11a" (UID: 
"00ff61ed-5d70-4346-9df0-18cd69b0c11a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.281518 5000 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00ff61ed-5d70-4346-9df0-18cd69b0c11a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.282731 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00ff61ed-5d70-4346-9df0-18cd69b0c11a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "00ff61ed-5d70-4346-9df0-18cd69b0c11a" (UID: "00ff61ed-5d70-4346-9df0-18cd69b0c11a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.294702 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-scripts" (OuterVolumeSpecName: "scripts") pod "00ff61ed-5d70-4346-9df0-18cd69b0c11a" (UID: "00ff61ed-5d70-4346-9df0-18cd69b0c11a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.311101 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00ff61ed-5d70-4346-9df0-18cd69b0c11a-kube-api-access-4676f" (OuterVolumeSpecName: "kube-api-access-4676f") pod "00ff61ed-5d70-4346-9df0-18cd69b0c11a" (UID: "00ff61ed-5d70-4346-9df0-18cd69b0c11a"). InnerVolumeSpecName "kube-api-access-4676f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.358021 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "00ff61ed-5d70-4346-9df0-18cd69b0c11a" (UID: "00ff61ed-5d70-4346-9df0-18cd69b0c11a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.370096 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "00ff61ed-5d70-4346-9df0-18cd69b0c11a" (UID: "00ff61ed-5d70-4346-9df0-18cd69b0c11a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.383128 5000 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.383157 5000 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.383166 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4676f\" (UniqueName: \"kubernetes.io/projected/00ff61ed-5d70-4346-9df0-18cd69b0c11a-kube-api-access-4676f\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.383176 5000 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/00ff61ed-5d70-4346-9df0-18cd69b0c11a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.383183 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.480112 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00ff61ed-5d70-4346-9df0-18cd69b0c11a" (UID: "00ff61ed-5d70-4346-9df0-18cd69b0c11a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.485094 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.500794 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-config-data" (OuterVolumeSpecName: "config-data") pod "00ff61ed-5d70-4346-9df0-18cd69b0c11a" (UID: "00ff61ed-5d70-4346-9df0-18cd69b0c11a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.587660 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00ff61ed-5d70-4346-9df0-18cd69b0c11a-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.780692 5000 generic.go:334] "Generic (PLEG): container finished" podID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerID="cab1b41094ba2ae7964c65dc577180863e79c4edce54f073da42a6170488b9c3" exitCode=0 Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.780762 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.780789 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00ff61ed-5d70-4346-9df0-18cd69b0c11a","Type":"ContainerDied","Data":"cab1b41094ba2ae7964c65dc577180863e79c4edce54f073da42a6170488b9c3"} Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.780826 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00ff61ed-5d70-4346-9df0-18cd69b0c11a","Type":"ContainerDied","Data":"eef8c11ef5aa8cd671cb6f9b1fb1e64a1dcc365abc109a689ffad4afcee875cd"} Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.780848 5000 scope.go:117] "RemoveContainer" containerID="969de1746c896fe89540f9bf689231cc743fecf1b210c9bb4476d512f96e38c5" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.783468 5000 generic.go:334] "Generic (PLEG): container finished" podID="c33ec666-d825-48a7-a50e-7968c287e884" containerID="2769b0b2ee4f14ecb1d0c79e4b5960de3eb2d5e5b54a6ad2b7f0d86cedb5e024" exitCode=143 Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.783599 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c33ec666-d825-48a7-a50e-7968c287e884","Type":"ContainerDied","Data":"2769b0b2ee4f14ecb1d0c79e4b5960de3eb2d5e5b54a6ad2b7f0d86cedb5e024"} Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.815099 5000 scope.go:117] "RemoveContainer" containerID="071456611bf6a61986aa7f00128fa6534cafeb0a04496f42defa8204b63e6f24" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.826596 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.837874 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.842229 5000 scope.go:117] "RemoveContainer" containerID="cab1b41094ba2ae7964c65dc577180863e79c4edce54f073da42a6170488b9c3" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.848037 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:40 crc kubenswrapper[5000]: E0105 21:53:40.848531 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="proxy-httpd" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.848547 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="proxy-httpd" Jan 05 21:53:40 crc kubenswrapper[5000]: E0105 21:53:40.848571 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="ceilometer-central-agent" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.848577 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="ceilometer-central-agent" Jan 05 21:53:40 crc kubenswrapper[5000]: E0105 21:53:40.848586 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="sg-core" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 
21:53:40.848595 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="sg-core" Jan 05 21:53:40 crc kubenswrapper[5000]: E0105 21:53:40.848629 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="ceilometer-notification-agent" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.848636 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="ceilometer-notification-agent" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.848836 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="sg-core" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.848851 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="proxy-httpd" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.848867 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="ceilometer-notification-agent" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.848908 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" containerName="ceilometer-central-agent" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.850770 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.854767 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.855029 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.855168 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.883956 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.895347 5000 scope.go:117] "RemoveContainer" containerID="fbd79111645e72149af918e450b088a2d350984336342f38a5437fa054bb8036" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.895851 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39d20025-7185-4dee-9c30-1771d6fd6ece-log-httpd\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.895909 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.896011 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " 
pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.896111 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pztgq\" (UniqueName: \"kubernetes.io/projected/39d20025-7185-4dee-9c30-1771d6fd6ece-kube-api-access-pztgq\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.896151 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-config-data\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.896174 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39d20025-7185-4dee-9c30-1771d6fd6ece-run-httpd\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.896347 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.896420 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-scripts\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.916532 5000 scope.go:117] "RemoveContainer" 
containerID="969de1746c896fe89540f9bf689231cc743fecf1b210c9bb4476d512f96e38c5" Jan 05 21:53:40 crc kubenswrapper[5000]: E0105 21:53:40.917005 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"969de1746c896fe89540f9bf689231cc743fecf1b210c9bb4476d512f96e38c5\": container with ID starting with 969de1746c896fe89540f9bf689231cc743fecf1b210c9bb4476d512f96e38c5 not found: ID does not exist" containerID="969de1746c896fe89540f9bf689231cc743fecf1b210c9bb4476d512f96e38c5" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.917042 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"969de1746c896fe89540f9bf689231cc743fecf1b210c9bb4476d512f96e38c5"} err="failed to get container status \"969de1746c896fe89540f9bf689231cc743fecf1b210c9bb4476d512f96e38c5\": rpc error: code = NotFound desc = could not find container \"969de1746c896fe89540f9bf689231cc743fecf1b210c9bb4476d512f96e38c5\": container with ID starting with 969de1746c896fe89540f9bf689231cc743fecf1b210c9bb4476d512f96e38c5 not found: ID does not exist" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.917071 5000 scope.go:117] "RemoveContainer" containerID="071456611bf6a61986aa7f00128fa6534cafeb0a04496f42defa8204b63e6f24" Jan 05 21:53:40 crc kubenswrapper[5000]: E0105 21:53:40.917454 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"071456611bf6a61986aa7f00128fa6534cafeb0a04496f42defa8204b63e6f24\": container with ID starting with 071456611bf6a61986aa7f00128fa6534cafeb0a04496f42defa8204b63e6f24 not found: ID does not exist" containerID="071456611bf6a61986aa7f00128fa6534cafeb0a04496f42defa8204b63e6f24" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.917478 5000 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"071456611bf6a61986aa7f00128fa6534cafeb0a04496f42defa8204b63e6f24"} err="failed to get container status \"071456611bf6a61986aa7f00128fa6534cafeb0a04496f42defa8204b63e6f24\": rpc error: code = NotFound desc = could not find container \"071456611bf6a61986aa7f00128fa6534cafeb0a04496f42defa8204b63e6f24\": container with ID starting with 071456611bf6a61986aa7f00128fa6534cafeb0a04496f42defa8204b63e6f24 not found: ID does not exist" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.917497 5000 scope.go:117] "RemoveContainer" containerID="cab1b41094ba2ae7964c65dc577180863e79c4edce54f073da42a6170488b9c3" Jan 05 21:53:40 crc kubenswrapper[5000]: E0105 21:53:40.917777 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cab1b41094ba2ae7964c65dc577180863e79c4edce54f073da42a6170488b9c3\": container with ID starting with cab1b41094ba2ae7964c65dc577180863e79c4edce54f073da42a6170488b9c3 not found: ID does not exist" containerID="cab1b41094ba2ae7964c65dc577180863e79c4edce54f073da42a6170488b9c3" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.917802 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cab1b41094ba2ae7964c65dc577180863e79c4edce54f073da42a6170488b9c3"} err="failed to get container status \"cab1b41094ba2ae7964c65dc577180863e79c4edce54f073da42a6170488b9c3\": rpc error: code = NotFound desc = could not find container \"cab1b41094ba2ae7964c65dc577180863e79c4edce54f073da42a6170488b9c3\": container with ID starting with cab1b41094ba2ae7964c65dc577180863e79c4edce54f073da42a6170488b9c3 not found: ID does not exist" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.917820 5000 scope.go:117] "RemoveContainer" containerID="fbd79111645e72149af918e450b088a2d350984336342f38a5437fa054bb8036" Jan 05 21:53:40 crc kubenswrapper[5000]: E0105 21:53:40.918131 5000 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fbd79111645e72149af918e450b088a2d350984336342f38a5437fa054bb8036\": container with ID starting with fbd79111645e72149af918e450b088a2d350984336342f38a5437fa054bb8036 not found: ID does not exist" containerID="fbd79111645e72149af918e450b088a2d350984336342f38a5437fa054bb8036" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.918159 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd79111645e72149af918e450b088a2d350984336342f38a5437fa054bb8036"} err="failed to get container status \"fbd79111645e72149af918e450b088a2d350984336342f38a5437fa054bb8036\": rpc error: code = NotFound desc = could not find container \"fbd79111645e72149af918e450b088a2d350984336342f38a5437fa054bb8036\": container with ID starting with fbd79111645e72149af918e450b088a2d350984336342f38a5437fa054bb8036 not found: ID does not exist" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.997389 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.997428 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39d20025-7185-4dee-9c30-1771d6fd6ece-log-httpd\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.997472 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " 
pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.997523 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pztgq\" (UniqueName: \"kubernetes.io/projected/39d20025-7185-4dee-9c30-1771d6fd6ece-kube-api-access-pztgq\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.997543 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-config-data\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.997560 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39d20025-7185-4dee-9c30-1771d6fd6ece-run-httpd\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.997585 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.997609 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-scripts\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.998212 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/39d20025-7185-4dee-9c30-1771d6fd6ece-run-httpd\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:40 crc kubenswrapper[5000]: I0105 21:53:40.998551 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39d20025-7185-4dee-9c30-1771d6fd6ece-log-httpd\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:41 crc kubenswrapper[5000]: I0105 21:53:41.002412 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-config-data\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:41 crc kubenswrapper[5000]: I0105 21:53:41.002509 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-scripts\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:41 crc kubenswrapper[5000]: I0105 21:53:41.002942 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:41 crc kubenswrapper[5000]: I0105 21:53:41.004224 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:41 crc kubenswrapper[5000]: I0105 21:53:41.005626 5000 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:41 crc kubenswrapper[5000]: I0105 21:53:41.014097 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pztgq\" (UniqueName: \"kubernetes.io/projected/39d20025-7185-4dee-9c30-1771d6fd6ece-kube-api-access-pztgq\") pod \"ceilometer-0\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " pod="openstack/ceilometer-0" Jan 05 21:53:41 crc kubenswrapper[5000]: I0105 21:53:41.190528 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:53:41 crc kubenswrapper[5000]: I0105 21:53:41.198211 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:41 crc kubenswrapper[5000]: I0105 21:53:41.334083 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00ff61ed-5d70-4346-9df0-18cd69b0c11a" path="/var/lib/kubelet/pods/00ff61ed-5d70-4346-9df0-18cd69b0c11a/volumes" Jan 05 21:53:41 crc kubenswrapper[5000]: I0105 21:53:41.694681 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:41 crc kubenswrapper[5000]: I0105 21:53:41.792266 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39d20025-7185-4dee-9c30-1771d6fd6ece","Type":"ContainerStarted","Data":"c7d239936d32bac274c2f2b9f277faab56ed912ae64b810f7602c4bb51e567ce"} Jan 05 21:53:42 crc kubenswrapper[5000]: I0105 21:53:42.804595 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39d20025-7185-4dee-9c30-1771d6fd6ece","Type":"ContainerStarted","Data":"a91fbc2e9d527cd0021b9540d393ed4320e32328ef6e5c78822e1bd315fd044d"} Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.310864 
5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.440781 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pmcb\" (UniqueName: \"kubernetes.io/projected/c33ec666-d825-48a7-a50e-7968c287e884-kube-api-access-4pmcb\") pod \"c33ec666-d825-48a7-a50e-7968c287e884\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.440858 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33ec666-d825-48a7-a50e-7968c287e884-combined-ca-bundle\") pod \"c33ec666-d825-48a7-a50e-7968c287e884\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.440989 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c33ec666-d825-48a7-a50e-7968c287e884-config-data\") pod \"c33ec666-d825-48a7-a50e-7968c287e884\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.441016 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c33ec666-d825-48a7-a50e-7968c287e884-logs\") pod \"c33ec666-d825-48a7-a50e-7968c287e884\" (UID: \"c33ec666-d825-48a7-a50e-7968c287e884\") " Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.441476 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c33ec666-d825-48a7-a50e-7968c287e884-logs" (OuterVolumeSpecName: "logs") pod "c33ec666-d825-48a7-a50e-7968c287e884" (UID: "c33ec666-d825-48a7-a50e-7968c287e884"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.443472 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c33ec666-d825-48a7-a50e-7968c287e884-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.445130 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33ec666-d825-48a7-a50e-7968c287e884-kube-api-access-4pmcb" (OuterVolumeSpecName: "kube-api-access-4pmcb") pod "c33ec666-d825-48a7-a50e-7968c287e884" (UID: "c33ec666-d825-48a7-a50e-7968c287e884"). InnerVolumeSpecName "kube-api-access-4pmcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.476460 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33ec666-d825-48a7-a50e-7968c287e884-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c33ec666-d825-48a7-a50e-7968c287e884" (UID: "c33ec666-d825-48a7-a50e-7968c287e884"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.489649 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33ec666-d825-48a7-a50e-7968c287e884-config-data" (OuterVolumeSpecName: "config-data") pod "c33ec666-d825-48a7-a50e-7968c287e884" (UID: "c33ec666-d825-48a7-a50e-7968c287e884"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.545323 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c33ec666-d825-48a7-a50e-7968c287e884-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.545376 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pmcb\" (UniqueName: \"kubernetes.io/projected/c33ec666-d825-48a7-a50e-7968c287e884-kube-api-access-4pmcb\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.545393 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33ec666-d825-48a7-a50e-7968c287e884-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.816784 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39d20025-7185-4dee-9c30-1771d6fd6ece","Type":"ContainerStarted","Data":"b422be5ac94b541b13d2806e3896692f7f462b3cee27568a8d08a65993a07377"} Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.817169 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39d20025-7185-4dee-9c30-1771d6fd6ece","Type":"ContainerStarted","Data":"829fae2e25a1096168f8643372fcdbb6c0353897efd7c1e68f9274a01b881819"} Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.819449 5000 generic.go:334] "Generic (PLEG): container finished" podID="c33ec666-d825-48a7-a50e-7968c287e884" containerID="39af0121e25cc677f07e8158f7a45e757703b51d198a255d7020a4deb0c977cc" exitCode=0 Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.819489 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c33ec666-d825-48a7-a50e-7968c287e884","Type":"ContainerDied","Data":"39af0121e25cc677f07e8158f7a45e757703b51d198a255d7020a4deb0c977cc"} Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.819637 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c33ec666-d825-48a7-a50e-7968c287e884","Type":"ContainerDied","Data":"ce74111706d96ecb8a96712159030e7898bfcc46a64b229d383046ad4f86ac00"} Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.819697 5000 scope.go:117] "RemoveContainer" containerID="39af0121e25cc677f07e8158f7a45e757703b51d198a255d7020a4deb0c977cc" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.819522 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.841240 5000 scope.go:117] "RemoveContainer" containerID="2769b0b2ee4f14ecb1d0c79e4b5960de3eb2d5e5b54a6ad2b7f0d86cedb5e024" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.864832 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.879402 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.890178 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:43 crc kubenswrapper[5000]: E0105 21:53:43.890560 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33ec666-d825-48a7-a50e-7968c287e884" containerName="nova-api-log" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.890581 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33ec666-d825-48a7-a50e-7968c287e884" containerName="nova-api-log" Jan 05 21:53:43 crc kubenswrapper[5000]: E0105 21:53:43.890621 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33ec666-d825-48a7-a50e-7968c287e884" 
containerName="nova-api-api" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.890628 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33ec666-d825-48a7-a50e-7968c287e884" containerName="nova-api-api" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.890782 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33ec666-d825-48a7-a50e-7968c287e884" containerName="nova-api-api" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.890801 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33ec666-d825-48a7-a50e-7968c287e884" containerName="nova-api-log" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.891957 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.892726 5000 scope.go:117] "RemoveContainer" containerID="39af0121e25cc677f07e8158f7a45e757703b51d198a255d7020a4deb0c977cc" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.893623 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 05 21:53:43 crc kubenswrapper[5000]: E0105 21:53:43.893771 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39af0121e25cc677f07e8158f7a45e757703b51d198a255d7020a4deb0c977cc\": container with ID starting with 39af0121e25cc677f07e8158f7a45e757703b51d198a255d7020a4deb0c977cc not found: ID does not exist" containerID="39af0121e25cc677f07e8158f7a45e757703b51d198a255d7020a4deb0c977cc" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.893800 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39af0121e25cc677f07e8158f7a45e757703b51d198a255d7020a4deb0c977cc"} err="failed to get container status \"39af0121e25cc677f07e8158f7a45e757703b51d198a255d7020a4deb0c977cc\": rpc error: code = NotFound desc = could not find container 
\"39af0121e25cc677f07e8158f7a45e757703b51d198a255d7020a4deb0c977cc\": container with ID starting with 39af0121e25cc677f07e8158f7a45e757703b51d198a255d7020a4deb0c977cc not found: ID does not exist" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.893826 5000 scope.go:117] "RemoveContainer" containerID="2769b0b2ee4f14ecb1d0c79e4b5960de3eb2d5e5b54a6ad2b7f0d86cedb5e024" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.903410 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:43 crc kubenswrapper[5000]: E0105 21:53:43.904017 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2769b0b2ee4f14ecb1d0c79e4b5960de3eb2d5e5b54a6ad2b7f0d86cedb5e024\": container with ID starting with 2769b0b2ee4f14ecb1d0c79e4b5960de3eb2d5e5b54a6ad2b7f0d86cedb5e024 not found: ID does not exist" containerID="2769b0b2ee4f14ecb1d0c79e4b5960de3eb2d5e5b54a6ad2b7f0d86cedb5e024" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.904066 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2769b0b2ee4f14ecb1d0c79e4b5960de3eb2d5e5b54a6ad2b7f0d86cedb5e024"} err="failed to get container status \"2769b0b2ee4f14ecb1d0c79e4b5960de3eb2d5e5b54a6ad2b7f0d86cedb5e024\": rpc error: code = NotFound desc = could not find container \"2769b0b2ee4f14ecb1d0c79e4b5960de3eb2d5e5b54a6ad2b7f0d86cedb5e024\": container with ID starting with 2769b0b2ee4f14ecb1d0c79e4b5960de3eb2d5e5b54a6ad2b7f0d86cedb5e024 not found: ID does not exist" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.904205 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 05 21:53:43 crc kubenswrapper[5000]: I0105 21:53:43.904305 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.052945 5000 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-internal-tls-certs\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.053001 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-public-tls-certs\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.053037 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-config-data\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.053112 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf2pn\" (UniqueName: \"kubernetes.io/projected/454abdfe-e004-43d8-83fd-b3caae8f9354-kube-api-access-kf2pn\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.053210 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.053264 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/454abdfe-e004-43d8-83fd-b3caae8f9354-logs\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.154629 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.154716 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/454abdfe-e004-43d8-83fd-b3caae8f9354-logs\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.154770 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-internal-tls-certs\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.154840 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-public-tls-certs\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.154881 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-config-data\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.154955 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf2pn\" (UniqueName: \"kubernetes.io/projected/454abdfe-e004-43d8-83fd-b3caae8f9354-kube-api-access-kf2pn\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.157103 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/454abdfe-e004-43d8-83fd-b3caae8f9354-logs\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.160483 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.161143 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-internal-tls-certs\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.161146 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-public-tls-certs\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.169865 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-config-data\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " 
pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.177366 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf2pn\" (UniqueName: \"kubernetes.io/projected/454abdfe-e004-43d8-83fd-b3caae8f9354-kube-api-access-kf2pn\") pod \"nova-api-0\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.218564 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.645059 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:44 crc kubenswrapper[5000]: W0105 21:53:44.648214 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod454abdfe_e004_43d8_83fd_b3caae8f9354.slice/crio-852cbd1ec148db676555bb7509e6b130588427cc189ed39b0c7f5c4f266da350 WatchSource:0}: Error finding container 852cbd1ec148db676555bb7509e6b130588427cc189ed39b0c7f5c4f266da350: Status 404 returned error can't find the container with id 852cbd1ec148db676555bb7509e6b130588427cc189ed39b0c7f5c4f266da350 Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.832035 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"454abdfe-e004-43d8-83fd-b3caae8f9354","Type":"ContainerStarted","Data":"fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667"} Jan 05 21:53:44 crc kubenswrapper[5000]: I0105 21:53:44.832082 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"454abdfe-e004-43d8-83fd-b3caae8f9354","Type":"ContainerStarted","Data":"852cbd1ec148db676555bb7509e6b130588427cc189ed39b0c7f5c4f266da350"} Jan 05 21:53:45 crc kubenswrapper[5000]: I0105 21:53:45.155055 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:45 crc kubenswrapper[5000]: I0105 21:53:45.170732 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:45 crc kubenswrapper[5000]: I0105 21:53:45.337296 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33ec666-d825-48a7-a50e-7968c287e884" path="/var/lib/kubelet/pods/c33ec666-d825-48a7-a50e-7968c287e884/volumes" Jan 05 21:53:45 crc kubenswrapper[5000]: I0105 21:53:45.842157 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"454abdfe-e004-43d8-83fd-b3caae8f9354","Type":"ContainerStarted","Data":"b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f"} Jan 05 21:53:45 crc kubenswrapper[5000]: I0105 21:53:45.847205 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39d20025-7185-4dee-9c30-1771d6fd6ece","Type":"ContainerStarted","Data":"a09ead82bb8e73cde622e2b06bc827f2b60510b3e20b82398ed93989543e7a8d"} Jan 05 21:53:45 crc kubenswrapper[5000]: I0105 21:53:45.847276 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="ceilometer-central-agent" containerID="cri-o://a91fbc2e9d527cd0021b9540d393ed4320e32328ef6e5c78822e1bd315fd044d" gracePeriod=30 Jan 05 21:53:45 crc kubenswrapper[5000]: I0105 21:53:45.847342 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="ceilometer-notification-agent" containerID="cri-o://829fae2e25a1096168f8643372fcdbb6c0353897efd7c1e68f9274a01b881819" gracePeriod=30 Jan 05 21:53:45 crc kubenswrapper[5000]: I0105 21:53:45.847360 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 21:53:45 crc kubenswrapper[5000]: I0105 
21:53:45.847350 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="sg-core" containerID="cri-o://b422be5ac94b541b13d2806e3896692f7f462b3cee27568a8d08a65993a07377" gracePeriod=30 Jan 05 21:53:45 crc kubenswrapper[5000]: I0105 21:53:45.847488 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="proxy-httpd" containerID="cri-o://a09ead82bb8e73cde622e2b06bc827f2b60510b3e20b82398ed93989543e7a8d" gracePeriod=30 Jan 05 21:53:45 crc kubenswrapper[5000]: I0105 21:53:45.867649 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.86763433 podStartE2EDuration="2.86763433s" podCreationTimestamp="2026-01-05 21:53:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:45.863809741 +0000 UTC m=+1180.820012220" watchObservedRunningTime="2026-01-05 21:53:45.86763433 +0000 UTC m=+1180.823836799" Jan 05 21:53:45 crc kubenswrapper[5000]: I0105 21:53:45.875988 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 05 21:53:45 crc kubenswrapper[5000]: I0105 21:53:45.892457 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6800161 podStartE2EDuration="5.892437546s" podCreationTimestamp="2026-01-05 21:53:40 +0000 UTC" firstStartedPulling="2026-01-05 21:53:41.701125957 +0000 UTC m=+1176.657328426" lastFinishedPulling="2026-01-05 21:53:44.913547403 +0000 UTC m=+1179.869749872" observedRunningTime="2026-01-05 21:53:45.889509513 +0000 UTC m=+1180.845711992" watchObservedRunningTime="2026-01-05 21:53:45.892437546 +0000 UTC m=+1180.848640035" Jan 05 21:53:46 crc 
kubenswrapper[5000]: I0105 21:53:46.092396 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5hdtf"] Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.094090 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.098240 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.098396 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.107309 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5hdtf"] Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.111826 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-config-data\") pod \"nova-cell1-cell-mapping-5hdtf\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.111868 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g55m\" (UniqueName: \"kubernetes.io/projected/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-kube-api-access-5g55m\") pod \"nova-cell1-cell-mapping-5hdtf\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.111983 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5hdtf\" (UID: 
\"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.112124 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-scripts\") pod \"nova-cell1-cell-mapping-5hdtf\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.213734 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-scripts\") pod \"nova-cell1-cell-mapping-5hdtf\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.213902 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-config-data\") pod \"nova-cell1-cell-mapping-5hdtf\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.213936 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g55m\" (UniqueName: \"kubernetes.io/projected/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-kube-api-access-5g55m\") pod \"nova-cell1-cell-mapping-5hdtf\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.213974 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5hdtf\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " 
pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.222815 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-scripts\") pod \"nova-cell1-cell-mapping-5hdtf\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.223458 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-config-data\") pod \"nova-cell1-cell-mapping-5hdtf\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.225459 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5hdtf\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.233234 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g55m\" (UniqueName: \"kubernetes.io/projected/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-kube-api-access-5g55m\") pod \"nova-cell1-cell-mapping-5hdtf\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.454392 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.857954 5000 generic.go:334] "Generic (PLEG): container finished" podID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerID="a09ead82bb8e73cde622e2b06bc827f2b60510b3e20b82398ed93989543e7a8d" exitCode=0 Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.858283 5000 generic.go:334] "Generic (PLEG): container finished" podID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerID="b422be5ac94b541b13d2806e3896692f7f462b3cee27568a8d08a65993a07377" exitCode=2 Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.858292 5000 generic.go:334] "Generic (PLEG): container finished" podID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerID="829fae2e25a1096168f8643372fcdbb6c0353897efd7c1e68f9274a01b881819" exitCode=0 Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.857982 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39d20025-7185-4dee-9c30-1771d6fd6ece","Type":"ContainerDied","Data":"a09ead82bb8e73cde622e2b06bc827f2b60510b3e20b82398ed93989543e7a8d"} Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.858517 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39d20025-7185-4dee-9c30-1771d6fd6ece","Type":"ContainerDied","Data":"b422be5ac94b541b13d2806e3896692f7f462b3cee27568a8d08a65993a07377"} Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.858530 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39d20025-7185-4dee-9c30-1771d6fd6ece","Type":"ContainerDied","Data":"829fae2e25a1096168f8643372fcdbb6c0353897efd7c1e68f9274a01b881819"} Jan 05 21:53:46 crc kubenswrapper[5000]: W0105 21:53:46.904943 5000 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e147c3d_cd84_4850_8ccc_9bd2c85c848a.slice/crio-597ae47455a121597e77a36d5adcce98c51eb5c65ecc7c68ac261649326d6d18 WatchSource:0}: Error finding container 597ae47455a121597e77a36d5adcce98c51eb5c65ecc7c68ac261649326d6d18: Status 404 returned error can't find the container with id 597ae47455a121597e77a36d5adcce98c51eb5c65ecc7c68ac261649326d6d18 Jan 05 21:53:46 crc kubenswrapper[5000]: I0105 21:53:46.909199 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5hdtf"] Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.289811 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.385183 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-lx9p4"] Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.385407 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" podUID="a46f047d-9a56-424d-a65b-5c9327eaa03d" containerName="dnsmasq-dns" containerID="cri-o://af62ebcffaf50ea090758b452cc8f8eb625daf69dbeaddfd5644b8213c377b6f" gracePeriod=10 Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.882430 5000 generic.go:334] "Generic (PLEG): container finished" podID="a46f047d-9a56-424d-a65b-5c9327eaa03d" containerID="af62ebcffaf50ea090758b452cc8f8eb625daf69dbeaddfd5644b8213c377b6f" exitCode=0 Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.882781 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" event={"ID":"a46f047d-9a56-424d-a65b-5c9327eaa03d","Type":"ContainerDied","Data":"af62ebcffaf50ea090758b452cc8f8eb625daf69dbeaddfd5644b8213c377b6f"} Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.882813 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" event={"ID":"a46f047d-9a56-424d-a65b-5c9327eaa03d","Type":"ContainerDied","Data":"bcd0cea4776a78df2fc58b1ae9d8f68c2fb70bdc2c1280668af0645c97693b25"} Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.882827 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcd0cea4776a78df2fc58b1ae9d8f68c2fb70bdc2c1280668af0645c97693b25" Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.884711 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5hdtf" event={"ID":"2e147c3d-cd84-4850-8ccc-9bd2c85c848a","Type":"ContainerStarted","Data":"7132fd9b996a4bceb12335f82b2a026959db3ddb2c21fdf80e307daf2150bf3b"} Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.884733 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5hdtf" event={"ID":"2e147c3d-cd84-4850-8ccc-9bd2c85c848a","Type":"ContainerStarted","Data":"597ae47455a121597e77a36d5adcce98c51eb5c65ecc7c68ac261649326d6d18"} Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.908765 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.909771 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5hdtf" podStartSLOduration=1.9097515280000001 podStartE2EDuration="1.909751528s" podCreationTimestamp="2026-01-05 21:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:47.901881304 +0000 UTC m=+1182.858083783" watchObservedRunningTime="2026-01-05 21:53:47.909751528 +0000 UTC m=+1182.865953997" Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.944480 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-dns-svc\") pod \"a46f047d-9a56-424d-a65b-5c9327eaa03d\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.944627 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-ovsdbserver-sb\") pod \"a46f047d-9a56-424d-a65b-5c9327eaa03d\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.944671 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-ovsdbserver-nb\") pod \"a46f047d-9a56-424d-a65b-5c9327eaa03d\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.944696 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-dns-swift-storage-0\") pod 
\"a46f047d-9a56-424d-a65b-5c9327eaa03d\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.944742 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kphqg\" (UniqueName: \"kubernetes.io/projected/a46f047d-9a56-424d-a65b-5c9327eaa03d-kube-api-access-kphqg\") pod \"a46f047d-9a56-424d-a65b-5c9327eaa03d\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.944792 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-config\") pod \"a46f047d-9a56-424d-a65b-5c9327eaa03d\" (UID: \"a46f047d-9a56-424d-a65b-5c9327eaa03d\") " Jan 05 21:53:47 crc kubenswrapper[5000]: I0105 21:53:47.964926 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46f047d-9a56-424d-a65b-5c9327eaa03d-kube-api-access-kphqg" (OuterVolumeSpecName: "kube-api-access-kphqg") pod "a46f047d-9a56-424d-a65b-5c9327eaa03d" (UID: "a46f047d-9a56-424d-a65b-5c9327eaa03d"). InnerVolumeSpecName "kube-api-access-kphqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.023612 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a46f047d-9a56-424d-a65b-5c9327eaa03d" (UID: "a46f047d-9a56-424d-a65b-5c9327eaa03d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.026092 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a46f047d-9a56-424d-a65b-5c9327eaa03d" (UID: "a46f047d-9a56-424d-a65b-5c9327eaa03d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.038934 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-config" (OuterVolumeSpecName: "config") pod "a46f047d-9a56-424d-a65b-5c9327eaa03d" (UID: "a46f047d-9a56-424d-a65b-5c9327eaa03d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.046395 5000 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.046429 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kphqg\" (UniqueName: \"kubernetes.io/projected/a46f047d-9a56-424d-a65b-5c9327eaa03d-kube-api-access-kphqg\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.046441 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.046453 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc 
kubenswrapper[5000]: I0105 21:53:48.049818 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a46f047d-9a56-424d-a65b-5c9327eaa03d" (UID: "a46f047d-9a56-424d-a65b-5c9327eaa03d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.053790 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a46f047d-9a56-424d-a65b-5c9327eaa03d" (UID: "a46f047d-9a56-424d-a65b-5c9327eaa03d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.154256 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.154295 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a46f047d-9a56-424d-a65b-5c9327eaa03d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.349598 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.357633 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pztgq\" (UniqueName: \"kubernetes.io/projected/39d20025-7185-4dee-9c30-1771d6fd6ece-kube-api-access-pztgq\") pod \"39d20025-7185-4dee-9c30-1771d6fd6ece\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.357698 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-scripts\") pod \"39d20025-7185-4dee-9c30-1771d6fd6ece\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.357737 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-ceilometer-tls-certs\") pod \"39d20025-7185-4dee-9c30-1771d6fd6ece\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.357777 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-sg-core-conf-yaml\") pod \"39d20025-7185-4dee-9c30-1771d6fd6ece\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.357928 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39d20025-7185-4dee-9c30-1771d6fd6ece-run-httpd\") pod \"39d20025-7185-4dee-9c30-1771d6fd6ece\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.358410 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/39d20025-7185-4dee-9c30-1771d6fd6ece-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "39d20025-7185-4dee-9c30-1771d6fd6ece" (UID: "39d20025-7185-4dee-9c30-1771d6fd6ece"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.358487 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-combined-ca-bundle\") pod \"39d20025-7185-4dee-9c30-1771d6fd6ece\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.358841 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39d20025-7185-4dee-9c30-1771d6fd6ece-log-httpd\") pod \"39d20025-7185-4dee-9c30-1771d6fd6ece\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.358871 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-config-data\") pod \"39d20025-7185-4dee-9c30-1771d6fd6ece\" (UID: \"39d20025-7185-4dee-9c30-1771d6fd6ece\") " Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.359251 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39d20025-7185-4dee-9c30-1771d6fd6ece-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "39d20025-7185-4dee-9c30-1771d6fd6ece" (UID: "39d20025-7185-4dee-9c30-1771d6fd6ece"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.362577 5000 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39d20025-7185-4dee-9c30-1771d6fd6ece-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.362684 5000 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39d20025-7185-4dee-9c30-1771d6fd6ece-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.366964 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-scripts" (OuterVolumeSpecName: "scripts") pod "39d20025-7185-4dee-9c30-1771d6fd6ece" (UID: "39d20025-7185-4dee-9c30-1771d6fd6ece"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.369794 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39d20025-7185-4dee-9c30-1771d6fd6ece-kube-api-access-pztgq" (OuterVolumeSpecName: "kube-api-access-pztgq") pod "39d20025-7185-4dee-9c30-1771d6fd6ece" (UID: "39d20025-7185-4dee-9c30-1771d6fd6ece"). InnerVolumeSpecName "kube-api-access-pztgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.434001 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "39d20025-7185-4dee-9c30-1771d6fd6ece" (UID: "39d20025-7185-4dee-9c30-1771d6fd6ece"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.434406 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "39d20025-7185-4dee-9c30-1771d6fd6ece" (UID: "39d20025-7185-4dee-9c30-1771d6fd6ece"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.442302 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39d20025-7185-4dee-9c30-1771d6fd6ece" (UID: "39d20025-7185-4dee-9c30-1771d6fd6ece"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.464274 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.464319 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pztgq\" (UniqueName: \"kubernetes.io/projected/39d20025-7185-4dee-9c30-1771d6fd6ece-kube-api-access-pztgq\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.464332 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.464344 5000 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-ceilometer-tls-certs\") on 
node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.464355 5000 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.470168 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-config-data" (OuterVolumeSpecName: "config-data") pod "39d20025-7185-4dee-9c30-1771d6fd6ece" (UID: "39d20025-7185-4dee-9c30-1771d6fd6ece"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.566865 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d20025-7185-4dee-9c30-1771d6fd6ece-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.900015 5000 generic.go:334] "Generic (PLEG): container finished" podID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerID="a91fbc2e9d527cd0021b9540d393ed4320e32328ef6e5c78822e1bd315fd044d" exitCode=0 Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.900067 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.900086 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39d20025-7185-4dee-9c30-1771d6fd6ece","Type":"ContainerDied","Data":"a91fbc2e9d527cd0021b9540d393ed4320e32328ef6e5c78822e1bd315fd044d"} Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.900561 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39d20025-7185-4dee-9c30-1771d6fd6ece","Type":"ContainerDied","Data":"c7d239936d32bac274c2f2b9f277faab56ed912ae64b810f7602c4bb51e567ce"} Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.900598 5000 scope.go:117] "RemoveContainer" containerID="a09ead82bb8e73cde622e2b06bc827f2b60510b3e20b82398ed93989543e7a8d" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.901024 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-lx9p4" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.948827 5000 scope.go:117] "RemoveContainer" containerID="b422be5ac94b541b13d2806e3896692f7f462b3cee27568a8d08a65993a07377" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.952659 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-lx9p4"] Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.971097 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-lx9p4"] Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.973860 5000 scope.go:117] "RemoveContainer" containerID="829fae2e25a1096168f8643372fcdbb6c0353897efd7c1e68f9274a01b881819" Jan 05 21:53:48 crc kubenswrapper[5000]: I0105 21:53:48.979009 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.006818 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] 
Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.020636 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:49 crc kubenswrapper[5000]: E0105 21:53:49.021951 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="sg-core" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.022856 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="sg-core" Jan 05 21:53:49 crc kubenswrapper[5000]: E0105 21:53:49.022993 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46f047d-9a56-424d-a65b-5c9327eaa03d" containerName="init" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.023063 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46f047d-9a56-424d-a65b-5c9327eaa03d" containerName="init" Jan 05 21:53:49 crc kubenswrapper[5000]: E0105 21:53:49.023160 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46f047d-9a56-424d-a65b-5c9327eaa03d" containerName="dnsmasq-dns" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.023227 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46f047d-9a56-424d-a65b-5c9327eaa03d" containerName="dnsmasq-dns" Jan 05 21:53:49 crc kubenswrapper[5000]: E0105 21:53:49.023309 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="ceilometer-central-agent" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.023385 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="ceilometer-central-agent" Jan 05 21:53:49 crc kubenswrapper[5000]: E0105 21:53:49.023457 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="ceilometer-notification-agent" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.023526 5000 
state_mem.go:107] "Deleted CPUSet assignment" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="ceilometer-notification-agent" Jan 05 21:53:49 crc kubenswrapper[5000]: E0105 21:53:49.023627 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="proxy-httpd" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.034003 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="proxy-httpd" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.028101 5000 scope.go:117] "RemoveContainer" containerID="a91fbc2e9d527cd0021b9540d393ed4320e32328ef6e5c78822e1bd315fd044d" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.036063 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a46f047d-9a56-424d-a65b-5c9327eaa03d" containerName="dnsmasq-dns" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.036095 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="ceilometer-central-agent" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.036122 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="proxy-httpd" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.036137 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="sg-core" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.036154 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" containerName="ceilometer-notification-agent" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.041286 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.041434 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.044574 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.044680 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.047345 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.073057 5000 scope.go:117] "RemoveContainer" containerID="a09ead82bb8e73cde622e2b06bc827f2b60510b3e20b82398ed93989543e7a8d" Jan 05 21:53:49 crc kubenswrapper[5000]: E0105 21:53:49.073639 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a09ead82bb8e73cde622e2b06bc827f2b60510b3e20b82398ed93989543e7a8d\": container with ID starting with a09ead82bb8e73cde622e2b06bc827f2b60510b3e20b82398ed93989543e7a8d not found: ID does not exist" containerID="a09ead82bb8e73cde622e2b06bc827f2b60510b3e20b82398ed93989543e7a8d" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.073694 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a09ead82bb8e73cde622e2b06bc827f2b60510b3e20b82398ed93989543e7a8d"} err="failed to get container status \"a09ead82bb8e73cde622e2b06bc827f2b60510b3e20b82398ed93989543e7a8d\": rpc error: code = NotFound desc = could not find container \"a09ead82bb8e73cde622e2b06bc827f2b60510b3e20b82398ed93989543e7a8d\": container with ID starting with a09ead82bb8e73cde622e2b06bc827f2b60510b3e20b82398ed93989543e7a8d not found: ID does not exist" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.073729 5000 scope.go:117] "RemoveContainer" containerID="b422be5ac94b541b13d2806e3896692f7f462b3cee27568a8d08a65993a07377" Jan 05 21:53:49 crc 
kubenswrapper[5000]: E0105 21:53:49.074703 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b422be5ac94b541b13d2806e3896692f7f462b3cee27568a8d08a65993a07377\": container with ID starting with b422be5ac94b541b13d2806e3896692f7f462b3cee27568a8d08a65993a07377 not found: ID does not exist" containerID="b422be5ac94b541b13d2806e3896692f7f462b3cee27568a8d08a65993a07377" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.074751 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b422be5ac94b541b13d2806e3896692f7f462b3cee27568a8d08a65993a07377"} err="failed to get container status \"b422be5ac94b541b13d2806e3896692f7f462b3cee27568a8d08a65993a07377\": rpc error: code = NotFound desc = could not find container \"b422be5ac94b541b13d2806e3896692f7f462b3cee27568a8d08a65993a07377\": container with ID starting with b422be5ac94b541b13d2806e3896692f7f462b3cee27568a8d08a65993a07377 not found: ID does not exist" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.074770 5000 scope.go:117] "RemoveContainer" containerID="829fae2e25a1096168f8643372fcdbb6c0353897efd7c1e68f9274a01b881819" Jan 05 21:53:49 crc kubenswrapper[5000]: E0105 21:53:49.075127 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"829fae2e25a1096168f8643372fcdbb6c0353897efd7c1e68f9274a01b881819\": container with ID starting with 829fae2e25a1096168f8643372fcdbb6c0353897efd7c1e68f9274a01b881819 not found: ID does not exist" containerID="829fae2e25a1096168f8643372fcdbb6c0353897efd7c1e68f9274a01b881819" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.075182 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"829fae2e25a1096168f8643372fcdbb6c0353897efd7c1e68f9274a01b881819"} err="failed to get container status 
\"829fae2e25a1096168f8643372fcdbb6c0353897efd7c1e68f9274a01b881819\": rpc error: code = NotFound desc = could not find container \"829fae2e25a1096168f8643372fcdbb6c0353897efd7c1e68f9274a01b881819\": container with ID starting with 829fae2e25a1096168f8643372fcdbb6c0353897efd7c1e68f9274a01b881819 not found: ID does not exist" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.075223 5000 scope.go:117] "RemoveContainer" containerID="a91fbc2e9d527cd0021b9540d393ed4320e32328ef6e5c78822e1bd315fd044d" Jan 05 21:53:49 crc kubenswrapper[5000]: E0105 21:53:49.075609 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a91fbc2e9d527cd0021b9540d393ed4320e32328ef6e5c78822e1bd315fd044d\": container with ID starting with a91fbc2e9d527cd0021b9540d393ed4320e32328ef6e5c78822e1bd315fd044d not found: ID does not exist" containerID="a91fbc2e9d527cd0021b9540d393ed4320e32328ef6e5c78822e1bd315fd044d" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.075639 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a91fbc2e9d527cd0021b9540d393ed4320e32328ef6e5c78822e1bd315fd044d"} err="failed to get container status \"a91fbc2e9d527cd0021b9540d393ed4320e32328ef6e5c78822e1bd315fd044d\": rpc error: code = NotFound desc = could not find container \"a91fbc2e9d527cd0021b9540d393ed4320e32328ef6e5c78822e1bd315fd044d\": container with ID starting with a91fbc2e9d527cd0021b9540d393ed4320e32328ef6e5c78822e1bd315fd044d not found: ID does not exist" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.077727 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 
21:53:49.077783 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-log-httpd\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.077814 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.077869 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-scripts\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.078095 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-run-httpd\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.078123 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-config-data\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.078408 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.078504 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcd2x\" (UniqueName: \"kubernetes.io/projected/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-kube-api-access-fcd2x\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.179464 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.179515 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcd2x\" (UniqueName: \"kubernetes.io/projected/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-kube-api-access-fcd2x\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.179551 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.179570 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-log-httpd\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " 
pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.179590 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.179638 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-scripts\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.179670 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-run-httpd\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.179706 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-config-data\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.180315 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-run-httpd\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.180370 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-log-httpd\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.183433 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.187252 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-config-data\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.190273 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-scripts\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.196712 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.199824 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcd2x\" (UniqueName: \"kubernetes.io/projected/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-kube-api-access-fcd2x\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.205640 5000 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1\") " pod="openstack/ceilometer-0" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.335048 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39d20025-7185-4dee-9c30-1771d6fd6ece" path="/var/lib/kubelet/pods/39d20025-7185-4dee-9c30-1771d6fd6ece/volumes" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.335819 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a46f047d-9a56-424d-a65b-5c9327eaa03d" path="/var/lib/kubelet/pods/a46f047d-9a56-424d-a65b-5c9327eaa03d/volumes" Jan 05 21:53:49 crc kubenswrapper[5000]: I0105 21:53:49.371275 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 05 21:53:50 crc kubenswrapper[5000]: I0105 21:53:49.940109 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 05 21:53:50 crc kubenswrapper[5000]: I0105 21:53:50.919346 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1","Type":"ContainerStarted","Data":"2f01a67d304093b428d14f85629c6d727d631a6a261369137591f2def3832737"} Jan 05 21:53:50 crc kubenswrapper[5000]: I0105 21:53:50.919969 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1","Type":"ContainerStarted","Data":"296dab262f7e58f3379373c8dcc6ba5d1f177824894e22afc2657637f6065258"} Jan 05 21:53:51 crc kubenswrapper[5000]: I0105 21:53:51.930618 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1","Type":"ContainerStarted","Data":"4207c4f911e536f8dde4caf73e5757d2f91b97ab85ba3428bc0f39e1ce621f58"} 
Jan 05 21:53:51 crc kubenswrapper[5000]: I0105 21:53:51.932129 5000 generic.go:334] "Generic (PLEG): container finished" podID="2e147c3d-cd84-4850-8ccc-9bd2c85c848a" containerID="7132fd9b996a4bceb12335f82b2a026959db3ddb2c21fdf80e307daf2150bf3b" exitCode=0 Jan 05 21:53:51 crc kubenswrapper[5000]: I0105 21:53:51.932162 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5hdtf" event={"ID":"2e147c3d-cd84-4850-8ccc-9bd2c85c848a","Type":"ContainerDied","Data":"7132fd9b996a4bceb12335f82b2a026959db3ddb2c21fdf80e307daf2150bf3b"} Jan 05 21:53:52 crc kubenswrapper[5000]: I0105 21:53:52.944237 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1","Type":"ContainerStarted","Data":"bd903e53a7894b246412467f5c26541d8d522572d2b2ea33fff44697285c098b"} Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.099157 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.099233 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.099285 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.100121 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"2afb4d5d8baa55f032a268f19c9c0e64f3bcb79bfc34f77baf7addae2164ef7a"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.100206 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://2afb4d5d8baa55f032a268f19c9c0e64f3bcb79bfc34f77baf7addae2164ef7a" gracePeriod=600 Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.374015 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.491824 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g55m\" (UniqueName: \"kubernetes.io/projected/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-kube-api-access-5g55m\") pod \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.491947 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-combined-ca-bundle\") pod \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.492060 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-scripts\") pod \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.492120 5000 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-config-data\") pod \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\" (UID: \"2e147c3d-cd84-4850-8ccc-9bd2c85c848a\") " Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.496600 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-kube-api-access-5g55m" (OuterVolumeSpecName: "kube-api-access-5g55m") pod "2e147c3d-cd84-4850-8ccc-9bd2c85c848a" (UID: "2e147c3d-cd84-4850-8ccc-9bd2c85c848a"). InnerVolumeSpecName "kube-api-access-5g55m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.497024 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-scripts" (OuterVolumeSpecName: "scripts") pod "2e147c3d-cd84-4850-8ccc-9bd2c85c848a" (UID: "2e147c3d-cd84-4850-8ccc-9bd2c85c848a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.521493 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-config-data" (OuterVolumeSpecName: "config-data") pod "2e147c3d-cd84-4850-8ccc-9bd2c85c848a" (UID: "2e147c3d-cd84-4850-8ccc-9bd2c85c848a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.523143 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e147c3d-cd84-4850-8ccc-9bd2c85c848a" (UID: "2e147c3d-cd84-4850-8ccc-9bd2c85c848a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.595262 5000 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-scripts\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.595661 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.595675 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g55m\" (UniqueName: \"kubernetes.io/projected/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-kube-api-access-5g55m\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.595690 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e147c3d-cd84-4850-8ccc-9bd2c85c848a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.954136 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1","Type":"ContainerStarted","Data":"757b38005791d96d8ea368451a1f43a4bbde8768483aced5de855b5f8539c00c"} Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.954268 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.956483 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="2afb4d5d8baa55f032a268f19c9c0e64f3bcb79bfc34f77baf7addae2164ef7a" exitCode=0 Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.956511 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"2afb4d5d8baa55f032a268f19c9c0e64f3bcb79bfc34f77baf7addae2164ef7a"} Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.956546 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"58dab0a989c1b4f585eb373d3bac27fc6e5066847040a7bbf02db8196a310e67"} Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.956576 5000 scope.go:117] "RemoveContainer" containerID="fcda7dd4d8fd644f00dbabb101ded861726f4a6f3ef2d7cca2281e23671cc2ef" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.958207 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5hdtf" event={"ID":"2e147c3d-cd84-4850-8ccc-9bd2c85c848a","Type":"ContainerDied","Data":"597ae47455a121597e77a36d5adcce98c51eb5c65ecc7c68ac261649326d6d18"} Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.958235 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="597ae47455a121597e77a36d5adcce98c51eb5c65ecc7c68ac261649326d6d18" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.958266 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5hdtf" Jan 05 21:53:53 crc kubenswrapper[5000]: I0105 21:53:53.989928 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.715579249 podStartE2EDuration="5.989909249s" podCreationTimestamp="2026-01-05 21:53:48 +0000 UTC" firstStartedPulling="2026-01-05 21:53:49.947123432 +0000 UTC m=+1184.903325901" lastFinishedPulling="2026-01-05 21:53:53.221453432 +0000 UTC m=+1188.177655901" observedRunningTime="2026-01-05 21:53:53.989870008 +0000 UTC m=+1188.946072497" watchObservedRunningTime="2026-01-05 21:53:53.989909249 +0000 UTC m=+1188.946111718" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.144548 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.144867 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0c0156ab-1f2c-40a9-b05e-3d29b25e7e50" containerName="nova-scheduler-scheduler" containerID="cri-o://1db24269b28bd8ef07a980a254c936ec0e3e2710fac7cad30d7ad05615e364a4" gracePeriod=30 Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.156468 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.156716 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="454abdfe-e004-43d8-83fd-b3caae8f9354" containerName="nova-api-log" containerID="cri-o://fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667" gracePeriod=30 Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.156778 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="454abdfe-e004-43d8-83fd-b3caae8f9354" containerName="nova-api-api" 
containerID="cri-o://b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f" gracePeriod=30 Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.191127 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.191504 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4a525a58-3825-42e1-a174-cf6efd751b30" containerName="nova-metadata-metadata" containerID="cri-o://b4d3bfd757243f933cf2b3b90c6a87107a029c78aa29f36a8499939877ad7b76" gracePeriod=30 Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.191720 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4a525a58-3825-42e1-a174-cf6efd751b30" containerName="nova-metadata-log" containerID="cri-o://5a4d4d738e2a18dd6384f140471e9354a77df767b241e8b9b57e0939c6cb0c2f" gracePeriod=30 Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.750439 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.818359 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-config-data\") pod \"454abdfe-e004-43d8-83fd-b3caae8f9354\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.818591 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-combined-ca-bundle\") pod \"454abdfe-e004-43d8-83fd-b3caae8f9354\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.818719 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-public-tls-certs\") pod \"454abdfe-e004-43d8-83fd-b3caae8f9354\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.818853 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-internal-tls-certs\") pod \"454abdfe-e004-43d8-83fd-b3caae8f9354\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.819066 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/454abdfe-e004-43d8-83fd-b3caae8f9354-logs\") pod \"454abdfe-e004-43d8-83fd-b3caae8f9354\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.819256 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf2pn\" (UniqueName: 
\"kubernetes.io/projected/454abdfe-e004-43d8-83fd-b3caae8f9354-kube-api-access-kf2pn\") pod \"454abdfe-e004-43d8-83fd-b3caae8f9354\" (UID: \"454abdfe-e004-43d8-83fd-b3caae8f9354\") " Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.819411 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/454abdfe-e004-43d8-83fd-b3caae8f9354-logs" (OuterVolumeSpecName: "logs") pod "454abdfe-e004-43d8-83fd-b3caae8f9354" (UID: "454abdfe-e004-43d8-83fd-b3caae8f9354"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.819856 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/454abdfe-e004-43d8-83fd-b3caae8f9354-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.825682 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/454abdfe-e004-43d8-83fd-b3caae8f9354-kube-api-access-kf2pn" (OuterVolumeSpecName: "kube-api-access-kf2pn") pod "454abdfe-e004-43d8-83fd-b3caae8f9354" (UID: "454abdfe-e004-43d8-83fd-b3caae8f9354"). InnerVolumeSpecName "kube-api-access-kf2pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.858707 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "454abdfe-e004-43d8-83fd-b3caae8f9354" (UID: "454abdfe-e004-43d8-83fd-b3caae8f9354"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.861005 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-config-data" (OuterVolumeSpecName: "config-data") pod "454abdfe-e004-43d8-83fd-b3caae8f9354" (UID: "454abdfe-e004-43d8-83fd-b3caae8f9354"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.873235 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "454abdfe-e004-43d8-83fd-b3caae8f9354" (UID: "454abdfe-e004-43d8-83fd-b3caae8f9354"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.881807 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "454abdfe-e004-43d8-83fd-b3caae8f9354" (UID: "454abdfe-e004-43d8-83fd-b3caae8f9354"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.922190 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.922234 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.922248 5000 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.922259 5000 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/454abdfe-e004-43d8-83fd-b3caae8f9354-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.922273 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf2pn\" (UniqueName: \"kubernetes.io/projected/454abdfe-e004-43d8-83fd-b3caae8f9354-kube-api-access-kf2pn\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.969877 5000 generic.go:334] "Generic (PLEG): container finished" podID="4a525a58-3825-42e1-a174-cf6efd751b30" containerID="5a4d4d738e2a18dd6384f140471e9354a77df767b241e8b9b57e0939c6cb0c2f" exitCode=143 Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.969970 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a525a58-3825-42e1-a174-cf6efd751b30","Type":"ContainerDied","Data":"5a4d4d738e2a18dd6384f140471e9354a77df767b241e8b9b57e0939c6cb0c2f"} Jan 05 21:53:54 crc 
kubenswrapper[5000]: I0105 21:53:54.972345 5000 generic.go:334] "Generic (PLEG): container finished" podID="454abdfe-e004-43d8-83fd-b3caae8f9354" containerID="b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f" exitCode=0 Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.972377 5000 generic.go:334] "Generic (PLEG): container finished" podID="454abdfe-e004-43d8-83fd-b3caae8f9354" containerID="fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667" exitCode=143 Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.972388 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.972425 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"454abdfe-e004-43d8-83fd-b3caae8f9354","Type":"ContainerDied","Data":"b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f"} Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.972470 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"454abdfe-e004-43d8-83fd-b3caae8f9354","Type":"ContainerDied","Data":"fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667"} Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.972486 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"454abdfe-e004-43d8-83fd-b3caae8f9354","Type":"ContainerDied","Data":"852cbd1ec148db676555bb7509e6b130588427cc189ed39b0c7f5c4f266da350"} Jan 05 21:53:54 crc kubenswrapper[5000]: I0105 21:53:54.972506 5000 scope.go:117] "RemoveContainer" containerID="b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.004243 5000 scope.go:117] "RemoveContainer" containerID="fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.004703 5000 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.013642 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.025277 5000 scope.go:117] "RemoveContainer" containerID="b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.025598 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:55 crc kubenswrapper[5000]: E0105 21:53:55.025687 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f\": container with ID starting with b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f not found: ID does not exist" containerID="b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.025725 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f"} err="failed to get container status \"b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f\": rpc error: code = NotFound desc = could not find container \"b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f\": container with ID starting with b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f not found: ID does not exist" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.025756 5000 scope.go:117] "RemoveContainer" containerID="fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667" Jan 05 21:53:55 crc kubenswrapper[5000]: E0105 21:53:55.025978 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e147c3d-cd84-4850-8ccc-9bd2c85c848a" containerName="nova-manage" Jan 05 21:53:55 crc 
kubenswrapper[5000]: I0105 21:53:55.025995 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e147c3d-cd84-4850-8ccc-9bd2c85c848a" containerName="nova-manage" Jan 05 21:53:55 crc kubenswrapper[5000]: E0105 21:53:55.026007 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454abdfe-e004-43d8-83fd-b3caae8f9354" containerName="nova-api-api" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.026016 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="454abdfe-e004-43d8-83fd-b3caae8f9354" containerName="nova-api-api" Jan 05 21:53:55 crc kubenswrapper[5000]: E0105 21:53:55.026049 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454abdfe-e004-43d8-83fd-b3caae8f9354" containerName="nova-api-log" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.026056 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="454abdfe-e004-43d8-83fd-b3caae8f9354" containerName="nova-api-log" Jan 05 21:53:55 crc kubenswrapper[5000]: E0105 21:53:55.026004 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667\": container with ID starting with fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667 not found: ID does not exist" containerID="fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.026110 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667"} err="failed to get container status \"fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667\": rpc error: code = NotFound desc = could not find container \"fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667\": container with ID starting with fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667 not found: ID does 
not exist" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.026128 5000 scope.go:117] "RemoveContainer" containerID="b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.026408 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="454abdfe-e004-43d8-83fd-b3caae8f9354" containerName="nova-api-log" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.026420 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e147c3d-cd84-4850-8ccc-9bd2c85c848a" containerName="nova-manage" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.026437 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="454abdfe-e004-43d8-83fd-b3caae8f9354" containerName="nova-api-api" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.026516 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f"} err="failed to get container status \"b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f\": rpc error: code = NotFound desc = could not find container \"b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f\": container with ID starting with b5604f19b8d6734fcf19b08c97df53a6fa40f0f00029de96bf987cfaf1c6129f not found: ID does not exist" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.026537 5000 scope.go:117] "RemoveContainer" containerID="fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.026749 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667"} err="failed to get container status \"fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667\": rpc error: code = NotFound desc = could not find container 
\"fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667\": container with ID starting with fa98e4733e0716b3f5af88b3a30e46e647c4a7d97cae23200330d2ea7645e667 not found: ID does not exist" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.027403 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.033558 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.033773 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.033929 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.053191 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:55 crc kubenswrapper[5000]: E0105 21:53:55.123170 5000 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1db24269b28bd8ef07a980a254c936ec0e3e2710fac7cad30d7ad05615e364a4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 05 21:53:55 crc kubenswrapper[5000]: E0105 21:53:55.124507 5000 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1db24269b28bd8ef07a980a254c936ec0e3e2710fac7cad30d7ad05615e364a4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 05 21:53:55 crc kubenswrapper[5000]: E0105 21:53:55.125550 5000 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="1db24269b28bd8ef07a980a254c936ec0e3e2710fac7cad30d7ad05615e364a4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 05 21:53:55 crc kubenswrapper[5000]: E0105 21:53:55.125620 5000 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0c0156ab-1f2c-40a9-b05e-3d29b25e7e50" containerName="nova-scheduler-scheduler" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.129638 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c5dc335-0750-413c-a08d-6aaea2323daf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.129685 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggx89\" (UniqueName: \"kubernetes.io/projected/2c5dc335-0750-413c-a08d-6aaea2323daf-kube-api-access-ggx89\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.129715 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c5dc335-0750-413c-a08d-6aaea2323daf-config-data\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.129884 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c5dc335-0750-413c-a08d-6aaea2323daf-logs\") pod \"nova-api-0\" (UID: 
\"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.129940 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c5dc335-0750-413c-a08d-6aaea2323daf-public-tls-certs\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.129961 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c5dc335-0750-413c-a08d-6aaea2323daf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.231379 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c5dc335-0750-413c-a08d-6aaea2323daf-public-tls-certs\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.231429 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c5dc335-0750-413c-a08d-6aaea2323daf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.231472 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c5dc335-0750-413c-a08d-6aaea2323daf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.231498 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ggx89\" (UniqueName: \"kubernetes.io/projected/2c5dc335-0750-413c-a08d-6aaea2323daf-kube-api-access-ggx89\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.231523 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c5dc335-0750-413c-a08d-6aaea2323daf-config-data\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.231629 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c5dc335-0750-413c-a08d-6aaea2323daf-logs\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.232437 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c5dc335-0750-413c-a08d-6aaea2323daf-logs\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.236459 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c5dc335-0750-413c-a08d-6aaea2323daf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.236650 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c5dc335-0750-413c-a08d-6aaea2323daf-config-data\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc 
kubenswrapper[5000]: I0105 21:53:55.236658 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c5dc335-0750-413c-a08d-6aaea2323daf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.237318 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c5dc335-0750-413c-a08d-6aaea2323daf-public-tls-certs\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.250736 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggx89\" (UniqueName: \"kubernetes.io/projected/2c5dc335-0750-413c-a08d-6aaea2323daf-kube-api-access-ggx89\") pod \"nova-api-0\" (UID: \"2c5dc335-0750-413c-a08d-6aaea2323daf\") " pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.334547 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="454abdfe-e004-43d8-83fd-b3caae8f9354" path="/var/lib/kubelet/pods/454abdfe-e004-43d8-83fd-b3caae8f9354/volumes" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.350407 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.807842 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 05 21:53:55 crc kubenswrapper[5000]: W0105 21:53:55.812506 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c5dc335_0750_413c_a08d_6aaea2323daf.slice/crio-c5c39c9f9301bdd467e5c8262fb84d6b36b5db50775da224f05867ca51994254 WatchSource:0}: Error finding container c5c39c9f9301bdd467e5c8262fb84d6b36b5db50775da224f05867ca51994254: Status 404 returned error can't find the container with id c5c39c9f9301bdd467e5c8262fb84d6b36b5db50775da224f05867ca51994254 Jan 05 21:53:55 crc kubenswrapper[5000]: I0105 21:53:55.984346 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c5dc335-0750-413c-a08d-6aaea2323daf","Type":"ContainerStarted","Data":"c5c39c9f9301bdd467e5c8262fb84d6b36b5db50775da224f05867ca51994254"} Jan 05 21:53:57 crc kubenswrapper[5000]: I0105 21:53:57.007271 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c5dc335-0750-413c-a08d-6aaea2323daf","Type":"ContainerStarted","Data":"bc7d1609ae930338422d21984c676f45b46257b2b5606a314cbc474d9945343a"} Jan 05 21:53:57 crc kubenswrapper[5000]: I0105 21:53:57.007604 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c5dc335-0750-413c-a08d-6aaea2323daf","Type":"ContainerStarted","Data":"2bbe078e4e4bef3290595de9f9a1fd8cd5bb79929257c9068c91e25a54f76387"} Jan 05 21:53:57 crc kubenswrapper[5000]: I0105 21:53:57.037013 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.036991954 podStartE2EDuration="2.036991954s" podCreationTimestamp="2026-01-05 21:53:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:53:57.025667601 +0000 UTC m=+1191.981870080" watchObservedRunningTime="2026-01-05 21:53:57.036991954 +0000 UTC m=+1191.993194433" Jan 05 21:53:57 crc kubenswrapper[5000]: I0105 21:53:57.316163 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4a525a58-3825-42e1-a174-cf6efd751b30" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:41960->10.217.0.196:8775: read: connection reset by peer" Jan 05 21:53:57 crc kubenswrapper[5000]: I0105 21:53:57.316563 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4a525a58-3825-42e1-a174-cf6efd751b30" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:41958->10.217.0.196:8775: read: connection reset by peer" Jan 05 21:53:57 crc kubenswrapper[5000]: I0105 21:53:57.877710 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:53:57 crc kubenswrapper[5000]: I0105 21:53:57.981403 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-nova-metadata-tls-certs\") pod \"4a525a58-3825-42e1-a174-cf6efd751b30\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " Jan 05 21:53:57 crc kubenswrapper[5000]: I0105 21:53:57.981466 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a525a58-3825-42e1-a174-cf6efd751b30-logs\") pod \"4a525a58-3825-42e1-a174-cf6efd751b30\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " Jan 05 21:53:57 crc kubenswrapper[5000]: I0105 21:53:57.981586 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-combined-ca-bundle\") pod \"4a525a58-3825-42e1-a174-cf6efd751b30\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " Jan 05 21:53:57 crc kubenswrapper[5000]: I0105 21:53:57.981929 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-config-data\") pod \"4a525a58-3825-42e1-a174-cf6efd751b30\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " Jan 05 21:53:57 crc kubenswrapper[5000]: I0105 21:53:57.981995 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6xl2\" (UniqueName: \"kubernetes.io/projected/4a525a58-3825-42e1-a174-cf6efd751b30-kube-api-access-w6xl2\") pod \"4a525a58-3825-42e1-a174-cf6efd751b30\" (UID: \"4a525a58-3825-42e1-a174-cf6efd751b30\") " Jan 05 21:53:57 crc kubenswrapper[5000]: I0105 21:53:57.984447 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4a525a58-3825-42e1-a174-cf6efd751b30-logs" (OuterVolumeSpecName: "logs") pod "4a525a58-3825-42e1-a174-cf6efd751b30" (UID: "4a525a58-3825-42e1-a174-cf6efd751b30"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:53:57 crc kubenswrapper[5000]: I0105 21:53:57.988271 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a525a58-3825-42e1-a174-cf6efd751b30-kube-api-access-w6xl2" (OuterVolumeSpecName: "kube-api-access-w6xl2") pod "4a525a58-3825-42e1-a174-cf6efd751b30" (UID: "4a525a58-3825-42e1-a174-cf6efd751b30"). InnerVolumeSpecName "kube-api-access-w6xl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.019258 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a525a58-3825-42e1-a174-cf6efd751b30" (UID: "4a525a58-3825-42e1-a174-cf6efd751b30"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.020205 5000 generic.go:334] "Generic (PLEG): container finished" podID="4a525a58-3825-42e1-a174-cf6efd751b30" containerID="b4d3bfd757243f933cf2b3b90c6a87107a029c78aa29f36a8499939877ad7b76" exitCode=0 Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.021031 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a525a58-3825-42e1-a174-cf6efd751b30","Type":"ContainerDied","Data":"b4d3bfd757243f933cf2b3b90c6a87107a029c78aa29f36a8499939877ad7b76"} Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.021079 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a525a58-3825-42e1-a174-cf6efd751b30","Type":"ContainerDied","Data":"422c5029a500c9bcf5ba39c431f78348237bc3de33c2229247c4696733e7c209"} Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.021101 5000 scope.go:117] "RemoveContainer" containerID="b4d3bfd757243f933cf2b3b90c6a87107a029c78aa29f36a8499939877ad7b76" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.021422 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.027180 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-config-data" (OuterVolumeSpecName: "config-data") pod "4a525a58-3825-42e1-a174-cf6efd751b30" (UID: "4a525a58-3825-42e1-a174-cf6efd751b30"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.067601 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4a525a58-3825-42e1-a174-cf6efd751b30" (UID: "4a525a58-3825-42e1-a174-cf6efd751b30"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.085021 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.085054 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.085069 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6xl2\" (UniqueName: \"kubernetes.io/projected/4a525a58-3825-42e1-a174-cf6efd751b30-kube-api-access-w6xl2\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.085083 5000 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a525a58-3825-42e1-a174-cf6efd751b30-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.085094 5000 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a525a58-3825-42e1-a174-cf6efd751b30-logs\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.100290 5000 scope.go:117] "RemoveContainer" 
containerID="5a4d4d738e2a18dd6384f140471e9354a77df767b241e8b9b57e0939c6cb0c2f" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.117803 5000 scope.go:117] "RemoveContainer" containerID="b4d3bfd757243f933cf2b3b90c6a87107a029c78aa29f36a8499939877ad7b76" Jan 05 21:53:58 crc kubenswrapper[5000]: E0105 21:53:58.118292 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4d3bfd757243f933cf2b3b90c6a87107a029c78aa29f36a8499939877ad7b76\": container with ID starting with b4d3bfd757243f933cf2b3b90c6a87107a029c78aa29f36a8499939877ad7b76 not found: ID does not exist" containerID="b4d3bfd757243f933cf2b3b90c6a87107a029c78aa29f36a8499939877ad7b76" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.118351 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4d3bfd757243f933cf2b3b90c6a87107a029c78aa29f36a8499939877ad7b76"} err="failed to get container status \"b4d3bfd757243f933cf2b3b90c6a87107a029c78aa29f36a8499939877ad7b76\": rpc error: code = NotFound desc = could not find container \"b4d3bfd757243f933cf2b3b90c6a87107a029c78aa29f36a8499939877ad7b76\": container with ID starting with b4d3bfd757243f933cf2b3b90c6a87107a029c78aa29f36a8499939877ad7b76 not found: ID does not exist" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.118388 5000 scope.go:117] "RemoveContainer" containerID="5a4d4d738e2a18dd6384f140471e9354a77df767b241e8b9b57e0939c6cb0c2f" Jan 05 21:53:58 crc kubenswrapper[5000]: E0105 21:53:58.118762 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a4d4d738e2a18dd6384f140471e9354a77df767b241e8b9b57e0939c6cb0c2f\": container with ID starting with 5a4d4d738e2a18dd6384f140471e9354a77df767b241e8b9b57e0939c6cb0c2f not found: ID does not exist" containerID="5a4d4d738e2a18dd6384f140471e9354a77df767b241e8b9b57e0939c6cb0c2f" Jan 05 21:53:58 crc 
kubenswrapper[5000]: I0105 21:53:58.118803 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a4d4d738e2a18dd6384f140471e9354a77df767b241e8b9b57e0939c6cb0c2f"} err="failed to get container status \"5a4d4d738e2a18dd6384f140471e9354a77df767b241e8b9b57e0939c6cb0c2f\": rpc error: code = NotFound desc = could not find container \"5a4d4d738e2a18dd6384f140471e9354a77df767b241e8b9b57e0939c6cb0c2f\": container with ID starting with 5a4d4d738e2a18dd6384f140471e9354a77df767b241e8b9b57e0939c6cb0c2f not found: ID does not exist" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.362436 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.373095 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.394657 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:58 crc kubenswrapper[5000]: E0105 21:53:58.395215 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a525a58-3825-42e1-a174-cf6efd751b30" containerName="nova-metadata-metadata" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.395234 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a525a58-3825-42e1-a174-cf6efd751b30" containerName="nova-metadata-metadata" Jan 05 21:53:58 crc kubenswrapper[5000]: E0105 21:53:58.395271 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a525a58-3825-42e1-a174-cf6efd751b30" containerName="nova-metadata-log" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.395278 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a525a58-3825-42e1-a174-cf6efd751b30" containerName="nova-metadata-log" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.395502 5000 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4a525a58-3825-42e1-a174-cf6efd751b30" containerName="nova-metadata-log" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.395523 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a525a58-3825-42e1-a174-cf6efd751b30" containerName="nova-metadata-metadata" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.396972 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.402502 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.403050 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.408828 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.492604 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee3ead96-f298-4707-b5aa-3f310fd71ade-logs\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.492661 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3ead96-f298-4707-b5aa-3f310fd71ade-config-data\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.492856 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee3ead96-f298-4707-b5aa-3f310fd71ade-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.493006 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3ead96-f298-4707-b5aa-3f310fd71ade-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.493115 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsnjd\" (UniqueName: \"kubernetes.io/projected/ee3ead96-f298-4707-b5aa-3f310fd71ade-kube-api-access-hsnjd\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.595205 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsnjd\" (UniqueName: \"kubernetes.io/projected/ee3ead96-f298-4707-b5aa-3f310fd71ade-kube-api-access-hsnjd\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.595291 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee3ead96-f298-4707-b5aa-3f310fd71ade-logs\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.595328 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3ead96-f298-4707-b5aa-3f310fd71ade-config-data\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: 
I0105 21:53:58.595392 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee3ead96-f298-4707-b5aa-3f310fd71ade-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.595442 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3ead96-f298-4707-b5aa-3f310fd71ade-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.596377 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee3ead96-f298-4707-b5aa-3f310fd71ade-logs\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.601420 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee3ead96-f298-4707-b5aa-3f310fd71ade-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.601761 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3ead96-f298-4707-b5aa-3f310fd71ade-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.601676 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ee3ead96-f298-4707-b5aa-3f310fd71ade-config-data\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.622445 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsnjd\" (UniqueName: \"kubernetes.io/projected/ee3ead96-f298-4707-b5aa-3f310fd71ade-kube-api-access-hsnjd\") pod \"nova-metadata-0\" (UID: \"ee3ead96-f298-4707-b5aa-3f310fd71ade\") " pod="openstack/nova-metadata-0" Jan 05 21:53:58 crc kubenswrapper[5000]: I0105 21:53:58.769512 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.036707 5000 generic.go:334] "Generic (PLEG): container finished" podID="0c0156ab-1f2c-40a9-b05e-3d29b25e7e50" containerID="1db24269b28bd8ef07a980a254c936ec0e3e2710fac7cad30d7ad05615e364a4" exitCode=0 Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.037086 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50","Type":"ContainerDied","Data":"1db24269b28bd8ef07a980a254c936ec0e3e2710fac7cad30d7ad05615e364a4"} Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.141857 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.203269 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qdvt\" (UniqueName: \"kubernetes.io/projected/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-kube-api-access-9qdvt\") pod \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\" (UID: \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\") " Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.203647 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-combined-ca-bundle\") pod \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\" (UID: \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\") " Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.203685 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-config-data\") pod \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\" (UID: \"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50\") " Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.209969 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-kube-api-access-9qdvt" (OuterVolumeSpecName: "kube-api-access-9qdvt") pod "0c0156ab-1f2c-40a9-b05e-3d29b25e7e50" (UID: "0c0156ab-1f2c-40a9-b05e-3d29b25e7e50"). InnerVolumeSpecName "kube-api-access-9qdvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.237245 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c0156ab-1f2c-40a9-b05e-3d29b25e7e50" (UID: "0c0156ab-1f2c-40a9-b05e-3d29b25e7e50"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.239260 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-config-data" (OuterVolumeSpecName: "config-data") pod "0c0156ab-1f2c-40a9-b05e-3d29b25e7e50" (UID: "0c0156ab-1f2c-40a9-b05e-3d29b25e7e50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.305115 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.305149 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.305159 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qdvt\" (UniqueName: \"kubernetes.io/projected/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50-kube-api-access-9qdvt\") on node \"crc\" DevicePath \"\"" Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.309536 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 05 21:53:59 crc kubenswrapper[5000]: I0105 21:53:59.334723 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a525a58-3825-42e1-a174-cf6efd751b30" path="/var/lib/kubelet/pods/4a525a58-3825-42e1-a174-cf6efd751b30/volumes" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.047719 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee3ead96-f298-4707-b5aa-3f310fd71ade","Type":"ContainerStarted","Data":"7ad8927c0f250b911957d9709bf098faa48232ac39db5cc67e46c1609f9ff262"} 
Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.048093 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee3ead96-f298-4707-b5aa-3f310fd71ade","Type":"ContainerStarted","Data":"76455694b819798ea5f8213d44113b0f80d9286909b3810bf02e706d4751ce0f"} Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.048108 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee3ead96-f298-4707-b5aa-3f310fd71ade","Type":"ContainerStarted","Data":"2d45781ff36217f85ca9bf6a4168cbfa63802db01be1c3e86b6983fdd8ae6297"} Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.049296 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c0156ab-1f2c-40a9-b05e-3d29b25e7e50","Type":"ContainerDied","Data":"74701263492ffaac383a5d59871266eba243b1e775500fad304e714d556d1637"} Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.049332 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.049368 5000 scope.go:117] "RemoveContainer" containerID="1db24269b28bd8ef07a980a254c936ec0e3e2710fac7cad30d7ad05615e364a4" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.099071 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.099050196 podStartE2EDuration="2.099050196s" podCreationTimestamp="2026-01-05 21:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:00.068178946 +0000 UTC m=+1195.024381415" watchObservedRunningTime="2026-01-05 21:54:00.099050196 +0000 UTC m=+1195.055252675" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.110095 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.120074 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.127008 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:54:00 crc kubenswrapper[5000]: E0105 21:54:00.127416 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0156ab-1f2c-40a9-b05e-3d29b25e7e50" containerName="nova-scheduler-scheduler" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.127433 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0156ab-1f2c-40a9-b05e-3d29b25e7e50" containerName="nova-scheduler-scheduler" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.127616 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0156ab-1f2c-40a9-b05e-3d29b25e7e50" containerName="nova-scheduler-scheduler" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.128323 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.136176 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.145956 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.330517 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3923d31-eca2-40c4-b412-07b158c9fbcc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a3923d31-eca2-40c4-b412-07b158c9fbcc\") " pod="openstack/nova-scheduler-0" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.331023 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nrmq\" (UniqueName: \"kubernetes.io/projected/a3923d31-eca2-40c4-b412-07b158c9fbcc-kube-api-access-4nrmq\") pod \"nova-scheduler-0\" (UID: \"a3923d31-eca2-40c4-b412-07b158c9fbcc\") " pod="openstack/nova-scheduler-0" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.331085 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3923d31-eca2-40c4-b412-07b158c9fbcc-config-data\") pod \"nova-scheduler-0\" (UID: \"a3923d31-eca2-40c4-b412-07b158c9fbcc\") " pod="openstack/nova-scheduler-0" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.432341 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nrmq\" (UniqueName: \"kubernetes.io/projected/a3923d31-eca2-40c4-b412-07b158c9fbcc-kube-api-access-4nrmq\") pod \"nova-scheduler-0\" (UID: \"a3923d31-eca2-40c4-b412-07b158c9fbcc\") " pod="openstack/nova-scheduler-0" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.432423 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3923d31-eca2-40c4-b412-07b158c9fbcc-config-data\") pod \"nova-scheduler-0\" (UID: \"a3923d31-eca2-40c4-b412-07b158c9fbcc\") " pod="openstack/nova-scheduler-0" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.432464 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3923d31-eca2-40c4-b412-07b158c9fbcc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a3923d31-eca2-40c4-b412-07b158c9fbcc\") " pod="openstack/nova-scheduler-0" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.438236 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3923d31-eca2-40c4-b412-07b158c9fbcc-config-data\") pod \"nova-scheduler-0\" (UID: \"a3923d31-eca2-40c4-b412-07b158c9fbcc\") " pod="openstack/nova-scheduler-0" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.440921 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3923d31-eca2-40c4-b412-07b158c9fbcc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a3923d31-eca2-40c4-b412-07b158c9fbcc\") " pod="openstack/nova-scheduler-0" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.447714 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nrmq\" (UniqueName: \"kubernetes.io/projected/a3923d31-eca2-40c4-b412-07b158c9fbcc-kube-api-access-4nrmq\") pod \"nova-scheduler-0\" (UID: \"a3923d31-eca2-40c4-b412-07b158c9fbcc\") " pod="openstack/nova-scheduler-0" Jan 05 21:54:00 crc kubenswrapper[5000]: I0105 21:54:00.743237 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 05 21:54:01 crc kubenswrapper[5000]: I0105 21:54:01.226105 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 05 21:54:01 crc kubenswrapper[5000]: I0105 21:54:01.347860 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c0156ab-1f2c-40a9-b05e-3d29b25e7e50" path="/var/lib/kubelet/pods/0c0156ab-1f2c-40a9-b05e-3d29b25e7e50/volumes" Jan 05 21:54:02 crc kubenswrapper[5000]: I0105 21:54:02.084587 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a3923d31-eca2-40c4-b412-07b158c9fbcc","Type":"ContainerStarted","Data":"1539972b692aef391e427dca950cadb48ef131ad0fe12cd82af99d34f50ca818"} Jan 05 21:54:02 crc kubenswrapper[5000]: I0105 21:54:02.084643 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a3923d31-eca2-40c4-b412-07b158c9fbcc","Type":"ContainerStarted","Data":"f3723e7c14b4aae1d71ff4231c24b9c49dca632901e54592f7ee3fa81a9c2bbd"} Jan 05 21:54:02 crc kubenswrapper[5000]: I0105 21:54:02.100026 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.09995243 podStartE2EDuration="2.09995243s" podCreationTimestamp="2026-01-05 21:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:02.098056146 +0000 UTC m=+1197.054258635" watchObservedRunningTime="2026-01-05 21:54:02.09995243 +0000 UTC m=+1197.056154899" Jan 05 21:54:03 crc kubenswrapper[5000]: I0105 21:54:03.770029 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 21:54:03 crc kubenswrapper[5000]: I0105 21:54:03.770686 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 05 21:54:05 crc kubenswrapper[5000]: I0105 
21:54:05.351763 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 21:54:05 crc kubenswrapper[5000]: I0105 21:54:05.352143 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 05 21:54:05 crc kubenswrapper[5000]: I0105 21:54:05.744177 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 05 21:54:06 crc kubenswrapper[5000]: I0105 21:54:06.368105 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2c5dc335-0750-413c-a08d-6aaea2323daf" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:54:06 crc kubenswrapper[5000]: I0105 21:54:06.368102 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2c5dc335-0750-413c-a08d-6aaea2323daf" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:54:08 crc kubenswrapper[5000]: I0105 21:54:08.770195 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 21:54:08 crc kubenswrapper[5000]: I0105 21:54:08.770568 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 05 21:54:09 crc kubenswrapper[5000]: I0105 21:54:09.783034 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ee3ead96-f298-4707-b5aa-3f310fd71ade" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:54:09 crc kubenswrapper[5000]: I0105 21:54:09.783070 5000 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="ee3ead96-f298-4707-b5aa-3f310fd71ade" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 05 21:54:10 crc kubenswrapper[5000]: I0105 21:54:10.743868 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 05 21:54:10 crc kubenswrapper[5000]: I0105 21:54:10.770760 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 05 21:54:11 crc kubenswrapper[5000]: I0105 21:54:11.202081 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 05 21:54:15 crc kubenswrapper[5000]: I0105 21:54:15.360145 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 21:54:15 crc kubenswrapper[5000]: I0105 21:54:15.361082 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 21:54:15 crc kubenswrapper[5000]: I0105 21:54:15.363885 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 05 21:54:15 crc kubenswrapper[5000]: I0105 21:54:15.374368 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 21:54:16 crc kubenswrapper[5000]: I0105 21:54:16.228488 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 05 21:54:16 crc kubenswrapper[5000]: I0105 21:54:16.235316 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 05 21:54:18 crc kubenswrapper[5000]: I0105 21:54:18.777988 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 05 21:54:18 crc kubenswrapper[5000]: I0105 21:54:18.781791 
5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 05 21:54:18 crc kubenswrapper[5000]: I0105 21:54:18.788420 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 05 21:54:19 crc kubenswrapper[5000]: I0105 21:54:19.264151 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 05 21:54:19 crc kubenswrapper[5000]: I0105 21:54:19.380208 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 05 21:54:30 crc kubenswrapper[5000]: I0105 21:54:30.334182 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 21:54:31 crc kubenswrapper[5000]: I0105 21:54:31.291260 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 21:54:34 crc kubenswrapper[5000]: I0105 21:54:34.699780 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" containerName="rabbitmq" containerID="cri-o://db02c684bfc93b249b4a800cd88b0cdf838e618435ddc9d4a16848863837c9be" gracePeriod=604796 Jan 05 21:54:35 crc kubenswrapper[5000]: I0105 21:54:35.245832 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" containerName="rabbitmq" containerID="cri-o://d516cc86801d2fef1efa27867fedb000b7acd6955f9965b5d9faba1cd6611430" gracePeriod=604797 Jan 05 21:54:38 crc kubenswrapper[5000]: I0105 21:54:38.841603 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused" Jan 05 21:54:39 crc kubenswrapper[5000]: I0105 21:54:39.184563 
5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.310132 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.371618 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-config-data\") pod \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.372077 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-erlang-cookie\") pod \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.372167 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-server-conf\") pod \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.372272 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-tls\") pod \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.372298 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-confd\") pod \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.372371 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-pod-info\") pod \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.372434 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.372492 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-plugins-conf\") pod \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.372535 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-erlang-cookie-secret\") pod \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.372570 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc4fl\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-kube-api-access-sc4fl\") pod \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " Jan 05 21:54:41 crc kubenswrapper[5000]: 
I0105 21:54:41.372604 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-plugins\") pod \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\" (UID: \"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.374758 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" (UID: "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.375668 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" (UID: "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.377284 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" (UID: "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.379011 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" (UID: "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.385855 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-kube-api-access-sc4fl" (OuterVolumeSpecName: "kube-api-access-sc4fl") pod "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" (UID: "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e"). InnerVolumeSpecName "kube-api-access-sc4fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.390458 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-pod-info" (OuterVolumeSpecName: "pod-info") pod "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" (UID: "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.390603 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" (UID: "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.397387 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" (UID: "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.424461 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-config-data" (OuterVolumeSpecName: "config-data") pod "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" (UID: "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.460245 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-server-conf" (OuterVolumeSpecName: "server-conf") pod "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" (UID: "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.474606 5000 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-pod-info\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.474665 5000 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.474679 5000 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.474692 5000 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.474704 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc4fl\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-kube-api-access-sc4fl\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.474714 5000 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.474727 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.474737 5000 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.474747 5000 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-server-conf\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.474757 5000 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.498957 5000 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.511157 5000 generic.go:334] "Generic (PLEG): container finished" podID="03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" containerID="db02c684bfc93b249b4a800cd88b0cdf838e618435ddc9d4a16848863837c9be" exitCode=0 Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.511206 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.511210 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e","Type":"ContainerDied","Data":"db02c684bfc93b249b4a800cd88b0cdf838e618435ddc9d4a16848863837c9be"} Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.511283 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e","Type":"ContainerDied","Data":"39efdc7ffc1edd538df415d6797be3fc91211d8e1cd8202a568c9c172e45b9d9"} Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.511303 5000 scope.go:117] "RemoveContainer" containerID="db02c684bfc93b249b4a800cd88b0cdf838e618435ddc9d4a16848863837c9be" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.535385 5000 generic.go:334] "Generic (PLEG): container finished" podID="a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" containerID="d516cc86801d2fef1efa27867fedb000b7acd6955f9965b5d9faba1cd6611430" exitCode=0 Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.535432 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb","Type":"ContainerDied","Data":"d516cc86801d2fef1efa27867fedb000b7acd6955f9965b5d9faba1cd6611430"} Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.540821 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" (UID: "03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.552783 5000 scope.go:117] "RemoveContainer" containerID="e176a95266bbce415d6b9a50c016e5284a45a76f8998709371f840490feb885a" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.576311 5000 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.576348 5000 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.579115 5000 scope.go:117] "RemoveContainer" containerID="db02c684bfc93b249b4a800cd88b0cdf838e618435ddc9d4a16848863837c9be" Jan 05 21:54:41 crc kubenswrapper[5000]: E0105 21:54:41.585123 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db02c684bfc93b249b4a800cd88b0cdf838e618435ddc9d4a16848863837c9be\": container with ID starting with db02c684bfc93b249b4a800cd88b0cdf838e618435ddc9d4a16848863837c9be not found: ID does not exist" containerID="db02c684bfc93b249b4a800cd88b0cdf838e618435ddc9d4a16848863837c9be" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.585264 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db02c684bfc93b249b4a800cd88b0cdf838e618435ddc9d4a16848863837c9be"} err="failed to get container status \"db02c684bfc93b249b4a800cd88b0cdf838e618435ddc9d4a16848863837c9be\": rpc error: code = NotFound desc = could not find container \"db02c684bfc93b249b4a800cd88b0cdf838e618435ddc9d4a16848863837c9be\": container with ID starting with db02c684bfc93b249b4a800cd88b0cdf838e618435ddc9d4a16848863837c9be not found: ID does not exist" Jan 05 21:54:41 
crc kubenswrapper[5000]: I0105 21:54:41.585299 5000 scope.go:117] "RemoveContainer" containerID="e176a95266bbce415d6b9a50c016e5284a45a76f8998709371f840490feb885a" Jan 05 21:54:41 crc kubenswrapper[5000]: E0105 21:54:41.585782 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e176a95266bbce415d6b9a50c016e5284a45a76f8998709371f840490feb885a\": container with ID starting with e176a95266bbce415d6b9a50c016e5284a45a76f8998709371f840490feb885a not found: ID does not exist" containerID="e176a95266bbce415d6b9a50c016e5284a45a76f8998709371f840490feb885a" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.585811 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e176a95266bbce415d6b9a50c016e5284a45a76f8998709371f840490feb885a"} err="failed to get container status \"e176a95266bbce415d6b9a50c016e5284a45a76f8998709371f840490feb885a\": rpc error: code = NotFound desc = could not find container \"e176a95266bbce415d6b9a50c016e5284a45a76f8998709371f840490feb885a\": container with ID starting with e176a95266bbce415d6b9a50c016e5284a45a76f8998709371f840490feb885a not found: ID does not exist" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.753091 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.779199 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-erlang-cookie-secret\") pod \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.779260 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.779309 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-pod-info\") pod \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.779366 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-erlang-cookie\") pod \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.779430 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-config-data\") pod \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.779509 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-plugins\") pod \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.779542 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j88w7\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-kube-api-access-j88w7\") pod \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.779584 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-plugins-conf\") pod \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.779619 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-confd\") pod \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.779650 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-server-conf\") pod \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.779704 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-tls\") pod \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\" (UID: \"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb\") " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 
21:54:41.781722 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" (UID: "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.782050 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" (UID: "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.783755 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" (UID: "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.793294 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-kube-api-access-j88w7" (OuterVolumeSpecName: "kube-api-access-j88w7") pod "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" (UID: "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb"). InnerVolumeSpecName "kube-api-access-j88w7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.796136 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" (UID: "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.799408 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-pod-info" (OuterVolumeSpecName: "pod-info") pod "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" (UID: "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.799527 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" (UID: "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.802992 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" (UID: "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.813487 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-config-data" (OuterVolumeSpecName: "config-data") pod "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" (UID: "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.874089 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.884129 5000 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.884165 5000 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.884186 5000 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.884195 5000 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-pod-info\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.884207 5000 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc 
kubenswrapper[5000]: I0105 21:54:41.884215 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.884224 5000 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.884233 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j88w7\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-kube-api-access-j88w7\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.884243 5000 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.893063 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-server-conf" (OuterVolumeSpecName: "server-conf") pod "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" (UID: "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.893739 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.912998 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 21:54:41 crc kubenswrapper[5000]: E0105 21:54:41.913436 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" containerName="setup-container" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.913458 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" containerName="setup-container" Jan 05 21:54:41 crc kubenswrapper[5000]: E0105 21:54:41.913470 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" containerName="rabbitmq" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.913478 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" containerName="rabbitmq" Jan 05 21:54:41 crc kubenswrapper[5000]: E0105 21:54:41.913493 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" containerName="setup-container" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.913501 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" containerName="setup-container" Jan 05 21:54:41 crc kubenswrapper[5000]: E0105 21:54:41.913513 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" containerName="rabbitmq" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.913520 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" containerName="rabbitmq" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.913782 5000 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" containerName="rabbitmq" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.913803 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" containerName="rabbitmq" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.915254 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.916267 5000 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.920132 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.920263 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.920603 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.921547 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.921587 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pr5wp" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.921769 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.921825 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.927711 5000 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.942301 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" (UID: "a5ef2bd8-5f44-4437-a0de-6d38dc153ffb"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.987633 5000 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.987680 5000 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb-server-conf\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:41 crc kubenswrapper[5000]: I0105 21:54:41.987691 5000 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.090386 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.090487 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ffcf6bf3-6f91-4afe-ba08-9e058c831480-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " 
pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.090511 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ffcf6bf3-6f91-4afe-ba08-9e058c831480-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.090537 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms5jl\" (UniqueName: \"kubernetes.io/projected/ffcf6bf3-6f91-4afe-ba08-9e058c831480-kube-api-access-ms5jl\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.090560 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ffcf6bf3-6f91-4afe-ba08-9e058c831480-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.090581 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ffcf6bf3-6f91-4afe-ba08-9e058c831480-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.090600 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ffcf6bf3-6f91-4afe-ba08-9e058c831480-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc 
kubenswrapper[5000]: I0105 21:54:42.090619 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffcf6bf3-6f91-4afe-ba08-9e058c831480-config-data\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.090679 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ffcf6bf3-6f91-4afe-ba08-9e058c831480-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.090804 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ffcf6bf3-6f91-4afe-ba08-9e058c831480-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.090907 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ffcf6bf3-6f91-4afe-ba08-9e058c831480-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.192281 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ffcf6bf3-6f91-4afe-ba08-9e058c831480-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.192365 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ffcf6bf3-6f91-4afe-ba08-9e058c831480-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.192436 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.192510 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ffcf6bf3-6f91-4afe-ba08-9e058c831480-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.192531 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ffcf6bf3-6f91-4afe-ba08-9e058c831480-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.192556 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms5jl\" (UniqueName: \"kubernetes.io/projected/ffcf6bf3-6f91-4afe-ba08-9e058c831480-kube-api-access-ms5jl\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.192577 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/ffcf6bf3-6f91-4afe-ba08-9e058c831480-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.192596 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ffcf6bf3-6f91-4afe-ba08-9e058c831480-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.192615 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ffcf6bf3-6f91-4afe-ba08-9e058c831480-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.192634 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffcf6bf3-6f91-4afe-ba08-9e058c831480-config-data\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.192661 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ffcf6bf3-6f91-4afe-ba08-9e058c831480-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.193211 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ffcf6bf3-6f91-4afe-ba08-9e058c831480-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " 
pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.193430 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ffcf6bf3-6f91-4afe-ba08-9e058c831480-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.194401 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ffcf6bf3-6f91-4afe-ba08-9e058c831480-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.194491 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.196080 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ffcf6bf3-6f91-4afe-ba08-9e058c831480-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.196196 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffcf6bf3-6f91-4afe-ba08-9e058c831480-config-data\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.200489 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ffcf6bf3-6f91-4afe-ba08-9e058c831480-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.200579 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ffcf6bf3-6f91-4afe-ba08-9e058c831480-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.200670 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ffcf6bf3-6f91-4afe-ba08-9e058c831480-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.208223 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ffcf6bf3-6f91-4afe-ba08-9e058c831480-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.221883 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms5jl\" (UniqueName: \"kubernetes.io/projected/ffcf6bf3-6f91-4afe-ba08-9e058c831480-kube-api-access-ms5jl\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.229178 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"ffcf6bf3-6f91-4afe-ba08-9e058c831480\") " 
pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.329804 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.553101 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a5ef2bd8-5f44-4437-a0de-6d38dc153ffb","Type":"ContainerDied","Data":"d7e91a0192b03cd77880800527b1731e6385200f0fb40c1ced4c34f8f2204046"} Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.553174 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.553484 5000 scope.go:117] "RemoveContainer" containerID="d516cc86801d2fef1efa27867fedb000b7acd6955f9965b5d9faba1cd6611430" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.586853 5000 scope.go:117] "RemoveContainer" containerID="af231ca02683df2a57ad6222bd1109d5d3b597c0c7de112a1efd70dd203cc63f" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.596473 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.605052 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.634645 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.638770 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.642664 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.642926 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.643121 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.643315 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.643448 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.643690 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-md6l9" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.643866 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.655792 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.804008 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.804331 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.804385 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.804420 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75p8g\" (UniqueName: \"kubernetes.io/projected/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-kube-api-access-75p8g\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.804466 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.804494 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.804553 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.804597 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.804654 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.804683 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.804722 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.843663 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] 
Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.905913 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.905983 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75p8g\" (UniqueName: \"kubernetes.io/projected/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-kube-api-access-75p8g\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.906028 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.906053 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.906096 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.906123 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.906159 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.906181 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.906205 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.906242 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.906260 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.906700 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.906959 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.907757 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.908816 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.909865 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.910029 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.910152 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.910637 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.911386 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.912493 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc 
kubenswrapper[5000]: I0105 21:54:42.925410 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75p8g\" (UniqueName: \"kubernetes.io/projected/d62d32f0-a7e0-4949-82d3-5e35d8fbf43b-kube-api-access-75p8g\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.940795 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:42 crc kubenswrapper[5000]: I0105 21:54:42.968635 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.112724 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-lrgqf"] Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.114790 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.116733 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.140480 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-lrgqf"] Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.215501 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.215610 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.215669 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-config\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.215733 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.215785 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.215811 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnqtd\" (UniqueName: \"kubernetes.io/projected/888566ed-04c0-4137-8a43-b164386f6438-kube-api-access-jnqtd\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.215907 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.243906 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.256581 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-lrgqf"] Jan 05 21:54:43 crc kubenswrapper[5000]: E0105 21:54:43.257284 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-jnqtd openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" 
podUID="888566ed-04c0-4137-8a43-b164386f6438" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.294570 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-n9pp4"] Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.297759 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.307607 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-n9pp4"] Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323512 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323566 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-config\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323605 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323627 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnqtd\" (UniqueName: 
\"kubernetes.io/projected/888566ed-04c0-4137-8a43-b164386f6438-kube-api-access-jnqtd\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323659 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323692 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-dns-svc\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323719 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323743 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323763 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323782 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323834 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323917 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-config\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323940 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkkxf\" (UniqueName: \"kubernetes.io/projected/0814f5ce-cff2-445e-9207-664fdcb0e357-kube-api-access-nkkxf\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.323980 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.325370 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.325860 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.326737 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.327680 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.329413 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.329628 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-config\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.337090 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e" path="/var/lib/kubelet/pods/03b7b95a-c3aa-4eb3-8954-3a24c89f7d6e/volumes" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.347965 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnqtd\" (UniqueName: \"kubernetes.io/projected/888566ed-04c0-4137-8a43-b164386f6438-kube-api-access-jnqtd\") pod \"dnsmasq-dns-79bd4cc8c9-lrgqf\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.350851 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ef2bd8-5f44-4437-a0de-6d38dc153ffb" path="/var/lib/kubelet/pods/a5ef2bd8-5f44-4437-a0de-6d38dc153ffb/volumes" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.425366 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.425406 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-config\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.425457 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-dns-svc\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.425482 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.425512 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.425527 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.425614 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nkkxf\" (UniqueName: \"kubernetes.io/projected/0814f5ce-cff2-445e-9207-664fdcb0e357-kube-api-access-nkkxf\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.427194 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.427644 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.427662 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-config\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.427811 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.428496 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.428730 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0814f5ce-cff2-445e-9207-664fdcb0e357-dns-svc\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.444312 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkkxf\" (UniqueName: \"kubernetes.io/projected/0814f5ce-cff2-445e-9207-664fdcb0e357-kube-api-access-nkkxf\") pod \"dnsmasq-dns-55478c4467-n9pp4\" (UID: \"0814f5ce-cff2-445e-9207-664fdcb0e357\") " pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.562849 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ffcf6bf3-6f91-4afe-ba08-9e058c831480","Type":"ContainerStarted","Data":"68175004ff5ac811d3460d95d969b8c3042f4ce6657292d72e6a4073d84dac71"} Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.563683 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.564315 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b","Type":"ContainerStarted","Data":"b365bb86876278f3d6c3a6a5b8dc908a666721a2b253b96645942981a494b45b"} Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.577275 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.628863 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-dns-swift-storage-0\") pod \"888566ed-04c0-4137-8a43-b164386f6438\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.629254 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-openstack-edpm-ipam\") pod \"888566ed-04c0-4137-8a43-b164386f6438\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.629366 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-dns-svc\") pod \"888566ed-04c0-4137-8a43-b164386f6438\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.629473 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnqtd\" (UniqueName: \"kubernetes.io/projected/888566ed-04c0-4137-8a43-b164386f6438-kube-api-access-jnqtd\") pod \"888566ed-04c0-4137-8a43-b164386f6438\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.629559 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-ovsdbserver-sb\") pod \"888566ed-04c0-4137-8a43-b164386f6438\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.629719 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-ovsdbserver-nb\") pod \"888566ed-04c0-4137-8a43-b164386f6438\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.629805 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-config\") pod \"888566ed-04c0-4137-8a43-b164386f6438\" (UID: \"888566ed-04c0-4137-8a43-b164386f6438\") " Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.629382 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "888566ed-04c0-4137-8a43-b164386f6438" (UID: "888566ed-04c0-4137-8a43-b164386f6438"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.629609 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "888566ed-04c0-4137-8a43-b164386f6438" (UID: "888566ed-04c0-4137-8a43-b164386f6438"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.629845 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "888566ed-04c0-4137-8a43-b164386f6438" (UID: "888566ed-04c0-4137-8a43-b164386f6438"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.630151 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "888566ed-04c0-4137-8a43-b164386f6438" (UID: "888566ed-04c0-4137-8a43-b164386f6438"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.631079 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "888566ed-04c0-4137-8a43-b164386f6438" (UID: "888566ed-04c0-4137-8a43-b164386f6438"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.631212 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-config" (OuterVolumeSpecName: "config") pod "888566ed-04c0-4137-8a43-b164386f6438" (UID: "888566ed-04c0-4137-8a43-b164386f6438"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.634812 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/888566ed-04c0-4137-8a43-b164386f6438-kube-api-access-jnqtd" (OuterVolumeSpecName: "kube-api-access-jnqtd") pod "888566ed-04c0-4137-8a43-b164386f6438" (UID: "888566ed-04c0-4137-8a43-b164386f6438"). InnerVolumeSpecName "kube-api-access-jnqtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.666661 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.732869 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnqtd\" (UniqueName: \"kubernetes.io/projected/888566ed-04c0-4137-8a43-b164386f6438-kube-api-access-jnqtd\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.733263 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.733273 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.733285 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.733294 5000 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.733302 5000 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:43 crc kubenswrapper[5000]: I0105 21:54:43.733310 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888566ed-04c0-4137-8a43-b164386f6438-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:44 crc kubenswrapper[5000]: I0105 
21:54:44.196085 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-n9pp4"] Jan 05 21:54:44 crc kubenswrapper[5000]: W0105 21:54:44.208121 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0814f5ce_cff2_445e_9207_664fdcb0e357.slice/crio-f8fe9793d6178959d7a260aef9ec6b78b67d02393ff37a12256edc4f7e416861 WatchSource:0}: Error finding container f8fe9793d6178959d7a260aef9ec6b78b67d02393ff37a12256edc4f7e416861: Status 404 returned error can't find the container with id f8fe9793d6178959d7a260aef9ec6b78b67d02393ff37a12256edc4f7e416861 Jan 05 21:54:44 crc kubenswrapper[5000]: I0105 21:54:44.576502 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-n9pp4" event={"ID":"0814f5ce-cff2-445e-9207-664fdcb0e357","Type":"ContainerStarted","Data":"f8fe9793d6178959d7a260aef9ec6b78b67d02393ff37a12256edc4f7e416861"} Jan 05 21:54:44 crc kubenswrapper[5000]: I0105 21:54:44.580837 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-lrgqf" Jan 05 21:54:44 crc kubenswrapper[5000]: I0105 21:54:44.584048 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ffcf6bf3-6f91-4afe-ba08-9e058c831480","Type":"ContainerStarted","Data":"f6db1986b84e8dde262cfa7daa87ce332f6b26b12334c19866c3dc4e36d3cf00"} Jan 05 21:54:44 crc kubenswrapper[5000]: I0105 21:54:44.805533 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-lrgqf"] Jan 05 21:54:44 crc kubenswrapper[5000]: I0105 21:54:44.813543 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-lrgqf"] Jan 05 21:54:45 crc kubenswrapper[5000]: I0105 21:54:45.334611 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="888566ed-04c0-4137-8a43-b164386f6438" path="/var/lib/kubelet/pods/888566ed-04c0-4137-8a43-b164386f6438/volumes" Jan 05 21:54:45 crc kubenswrapper[5000]: I0105 21:54:45.591461 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b","Type":"ContainerStarted","Data":"f3eeb3d6a4c33e29d9adb2a88529e1fb31e199190f7fd6a6f57603aa90cd9328"} Jan 05 21:54:45 crc kubenswrapper[5000]: I0105 21:54:45.595691 5000 generic.go:334] "Generic (PLEG): container finished" podID="0814f5ce-cff2-445e-9207-664fdcb0e357" containerID="55ab9d2228e785f5747b01161da3fc3eaa8edfc053ca9b24a2cd0bdee3987297" exitCode=0 Jan 05 21:54:45 crc kubenswrapper[5000]: I0105 21:54:45.597055 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-n9pp4" event={"ID":"0814f5ce-cff2-445e-9207-664fdcb0e357","Type":"ContainerDied","Data":"55ab9d2228e785f5747b01161da3fc3eaa8edfc053ca9b24a2cd0bdee3987297"} Jan 05 21:54:46 crc kubenswrapper[5000]: I0105 21:54:46.611455 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-n9pp4" 
event={"ID":"0814f5ce-cff2-445e-9207-664fdcb0e357","Type":"ContainerStarted","Data":"9bd6e45e790afde605707be9f52c78194404c085a0b8468fcd226bcdc8d085bd"} Jan 05 21:54:46 crc kubenswrapper[5000]: I0105 21:54:46.638109 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-n9pp4" podStartSLOduration=3.638090041 podStartE2EDuration="3.638090041s" podCreationTimestamp="2026-01-05 21:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:54:46.629445475 +0000 UTC m=+1241.585647944" watchObservedRunningTime="2026-01-05 21:54:46.638090041 +0000 UTC m=+1241.594292510" Jan 05 21:54:47 crc kubenswrapper[5000]: I0105 21:54:47.618441 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:53 crc kubenswrapper[5000]: I0105 21:54:53.668145 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-n9pp4" Jan 05 21:54:53 crc kubenswrapper[5000]: I0105 21:54:53.742525 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7csvm"] Jan 05 21:54:53 crc kubenswrapper[5000]: I0105 21:54:53.742968 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" podUID="eb9f5c4b-b0d7-42d7-bf63-06701667697b" containerName="dnsmasq-dns" containerID="cri-o://6d70b0ecdae014a4d3cf328837dfbf9c9bb1d60b9d4e22adafea85d8a6a0be34" gracePeriod=10 Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.194844 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.247747 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-ovsdbserver-nb\") pod \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.247798 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-config\") pod \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.247843 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-dns-svc\") pod \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.247869 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2zg5\" (UniqueName: \"kubernetes.io/projected/eb9f5c4b-b0d7-42d7-bf63-06701667697b-kube-api-access-p2zg5\") pod \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.247954 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-ovsdbserver-sb\") pod \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.247975 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-dns-swift-storage-0\") pod \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\" (UID: \"eb9f5c4b-b0d7-42d7-bf63-06701667697b\") " Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.255074 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9f5c4b-b0d7-42d7-bf63-06701667697b-kube-api-access-p2zg5" (OuterVolumeSpecName: "kube-api-access-p2zg5") pod "eb9f5c4b-b0d7-42d7-bf63-06701667697b" (UID: "eb9f5c4b-b0d7-42d7-bf63-06701667697b"). InnerVolumeSpecName "kube-api-access-p2zg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.304429 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb9f5c4b-b0d7-42d7-bf63-06701667697b" (UID: "eb9f5c4b-b0d7-42d7-bf63-06701667697b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.304437 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb9f5c4b-b0d7-42d7-bf63-06701667697b" (UID: "eb9f5c4b-b0d7-42d7-bf63-06701667697b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.309849 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb9f5c4b-b0d7-42d7-bf63-06701667697b" (UID: "eb9f5c4b-b0d7-42d7-bf63-06701667697b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.330167 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-config" (OuterVolumeSpecName: "config") pod "eb9f5c4b-b0d7-42d7-bf63-06701667697b" (UID: "eb9f5c4b-b0d7-42d7-bf63-06701667697b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.332184 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eb9f5c4b-b0d7-42d7-bf63-06701667697b" (UID: "eb9f5c4b-b0d7-42d7-bf63-06701667697b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.350413 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.350456 5000 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-config\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.350471 5000 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.350486 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2zg5\" (UniqueName: \"kubernetes.io/projected/eb9f5c4b-b0d7-42d7-bf63-06701667697b-kube-api-access-p2zg5\") on node \"crc\" DevicePath \"\"" Jan 05 
21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.350501 5000 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.350515 5000 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9f5c4b-b0d7-42d7-bf63-06701667697b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.688266 5000 generic.go:334] "Generic (PLEG): container finished" podID="eb9f5c4b-b0d7-42d7-bf63-06701667697b" containerID="6d70b0ecdae014a4d3cf328837dfbf9c9bb1d60b9d4e22adafea85d8a6a0be34" exitCode=0 Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.688342 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.688327 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" event={"ID":"eb9f5c4b-b0d7-42d7-bf63-06701667697b","Type":"ContainerDied","Data":"6d70b0ecdae014a4d3cf328837dfbf9c9bb1d60b9d4e22adafea85d8a6a0be34"} Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.688500 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7csvm" event={"ID":"eb9f5c4b-b0d7-42d7-bf63-06701667697b","Type":"ContainerDied","Data":"a25894c4806e6e629fb46f41bddea0a476db29e79003cf46b1701710c7ab410c"} Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.688534 5000 scope.go:117] "RemoveContainer" containerID="6d70b0ecdae014a4d3cf328837dfbf9c9bb1d60b9d4e22adafea85d8a6a0be34" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.720447 5000 scope.go:117] "RemoveContainer" containerID="5b3fe0a91ef6f525bd44596e83016e7802c0c23c5d632e8640ce24e41ca34b46" Jan 05 21:54:54 crc 
kubenswrapper[5000]: I0105 21:54:54.728265 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7csvm"] Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.739496 5000 scope.go:117] "RemoveContainer" containerID="6d70b0ecdae014a4d3cf328837dfbf9c9bb1d60b9d4e22adafea85d8a6a0be34" Jan 05 21:54:54 crc kubenswrapper[5000]: E0105 21:54:54.739966 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d70b0ecdae014a4d3cf328837dfbf9c9bb1d60b9d4e22adafea85d8a6a0be34\": container with ID starting with 6d70b0ecdae014a4d3cf328837dfbf9c9bb1d60b9d4e22adafea85d8a6a0be34 not found: ID does not exist" containerID="6d70b0ecdae014a4d3cf328837dfbf9c9bb1d60b9d4e22adafea85d8a6a0be34" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.740003 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d70b0ecdae014a4d3cf328837dfbf9c9bb1d60b9d4e22adafea85d8a6a0be34"} err="failed to get container status \"6d70b0ecdae014a4d3cf328837dfbf9c9bb1d60b9d4e22adafea85d8a6a0be34\": rpc error: code = NotFound desc = could not find container \"6d70b0ecdae014a4d3cf328837dfbf9c9bb1d60b9d4e22adafea85d8a6a0be34\": container with ID starting with 6d70b0ecdae014a4d3cf328837dfbf9c9bb1d60b9d4e22adafea85d8a6a0be34 not found: ID does not exist" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.740026 5000 scope.go:117] "RemoveContainer" containerID="5b3fe0a91ef6f525bd44596e83016e7802c0c23c5d632e8640ce24e41ca34b46" Jan 05 21:54:54 crc kubenswrapper[5000]: E0105 21:54:54.740235 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3fe0a91ef6f525bd44596e83016e7802c0c23c5d632e8640ce24e41ca34b46\": container with ID starting with 5b3fe0a91ef6f525bd44596e83016e7802c0c23c5d632e8640ce24e41ca34b46 not found: ID does not exist" 
containerID="5b3fe0a91ef6f525bd44596e83016e7802c0c23c5d632e8640ce24e41ca34b46" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.740266 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3fe0a91ef6f525bd44596e83016e7802c0c23c5d632e8640ce24e41ca34b46"} err="failed to get container status \"5b3fe0a91ef6f525bd44596e83016e7802c0c23c5d632e8640ce24e41ca34b46\": rpc error: code = NotFound desc = could not find container \"5b3fe0a91ef6f525bd44596e83016e7802c0c23c5d632e8640ce24e41ca34b46\": container with ID starting with 5b3fe0a91ef6f525bd44596e83016e7802c0c23c5d632e8640ce24e41ca34b46 not found: ID does not exist" Jan 05 21:54:54 crc kubenswrapper[5000]: I0105 21:54:54.742225 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7csvm"] Jan 05 21:54:55 crc kubenswrapper[5000]: I0105 21:54:55.333185 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb9f5c4b-b0d7-42d7-bf63-06701667697b" path="/var/lib/kubelet/pods/eb9f5c4b-b0d7-42d7-bf63-06701667697b/volumes" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.314793 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh"] Jan 05 21:55:02 crc kubenswrapper[5000]: E0105 21:55:02.315865 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9f5c4b-b0d7-42d7-bf63-06701667697b" containerName="dnsmasq-dns" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.315881 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9f5c4b-b0d7-42d7-bf63-06701667697b" containerName="dnsmasq-dns" Jan 05 21:55:02 crc kubenswrapper[5000]: E0105 21:55:02.315935 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9f5c4b-b0d7-42d7-bf63-06701667697b" containerName="init" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.315942 5000 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eb9f5c4b-b0d7-42d7-bf63-06701667697b" containerName="init" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.316156 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9f5c4b-b0d7-42d7-bf63-06701667697b" containerName="dnsmasq-dns" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.317159 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.320855 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.321149 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.322736 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.323051 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.326416 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh"] Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.398099 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.398484 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2jbzk\" (UniqueName: \"kubernetes.io/projected/61ec2645-0703-42ad-96da-136ceb8b9cda-kube-api-access-2jbzk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.398612 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.398997 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.503202 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.503660 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh\" (UID: 
\"61ec2645-0703-42ad-96da-136ceb8b9cda\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.503795 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jbzk\" (UniqueName: \"kubernetes.io/projected/61ec2645-0703-42ad-96da-136ceb8b9cda-kube-api-access-2jbzk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.503926 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.512002 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.516533 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.517606 5000 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.527746 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jbzk\" (UniqueName: \"kubernetes.io/projected/61ec2645-0703-42ad-96da-136ceb8b9cda-kube-api-access-2jbzk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:02 crc kubenswrapper[5000]: I0105 21:55:02.645683 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:03 crc kubenswrapper[5000]: I0105 21:55:03.167066 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh"] Jan 05 21:55:03 crc kubenswrapper[5000]: W0105 21:55:03.169115 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61ec2645_0703_42ad_96da_136ceb8b9cda.slice/crio-f5d57a39f91f002199d777a7c7c68cd404517fd30059abc79082c0d7d21f908c WatchSource:0}: Error finding container f5d57a39f91f002199d777a7c7c68cd404517fd30059abc79082c0d7d21f908c: Status 404 returned error can't find the container with id f5d57a39f91f002199d777a7c7c68cd404517fd30059abc79082c0d7d21f908c Jan 05 21:55:03 crc kubenswrapper[5000]: I0105 21:55:03.171694 5000 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 21:55:03 crc kubenswrapper[5000]: I0105 21:55:03.773838 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" event={"ID":"61ec2645-0703-42ad-96da-136ceb8b9cda","Type":"ContainerStarted","Data":"f5d57a39f91f002199d777a7c7c68cd404517fd30059abc79082c0d7d21f908c"} Jan 05 21:55:11 crc kubenswrapper[5000]: I0105 21:55:11.201737 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 21:55:11 crc kubenswrapper[5000]: I0105 21:55:11.884936 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" event={"ID":"61ec2645-0703-42ad-96da-136ceb8b9cda","Type":"ContainerStarted","Data":"8b2300954c86092fa75bf0ece364cdb26f24c04dd84fb934e2f14c8fef5d7967"} Jan 05 21:55:11 crc kubenswrapper[5000]: I0105 21:55:11.908091 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" podStartSLOduration=1.8804926819999999 podStartE2EDuration="9.908074535s" podCreationTimestamp="2026-01-05 21:55:02 +0000 UTC" firstStartedPulling="2026-01-05 21:55:03.171402538 +0000 UTC m=+1258.127605017" lastFinishedPulling="2026-01-05 21:55:11.198984391 +0000 UTC m=+1266.155186870" observedRunningTime="2026-01-05 21:55:11.896389262 +0000 UTC m=+1266.852591731" watchObservedRunningTime="2026-01-05 21:55:11.908074535 +0000 UTC m=+1266.864277004" Jan 05 21:55:16 crc kubenswrapper[5000]: I0105 21:55:16.941273 5000 generic.go:334] "Generic (PLEG): container finished" podID="d62d32f0-a7e0-4949-82d3-5e35d8fbf43b" containerID="f3eeb3d6a4c33e29d9adb2a88529e1fb31e199190f7fd6a6f57603aa90cd9328" exitCode=0 Jan 05 21:55:16 crc kubenswrapper[5000]: I0105 21:55:16.941340 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b","Type":"ContainerDied","Data":"f3eeb3d6a4c33e29d9adb2a88529e1fb31e199190f7fd6a6f57603aa90cd9328"} Jan 05 21:55:16 crc kubenswrapper[5000]: I0105 
21:55:16.943822 5000 generic.go:334] "Generic (PLEG): container finished" podID="ffcf6bf3-6f91-4afe-ba08-9e058c831480" containerID="f6db1986b84e8dde262cfa7daa87ce332f6b26b12334c19866c3dc4e36d3cf00" exitCode=0 Jan 05 21:55:16 crc kubenswrapper[5000]: I0105 21:55:16.943848 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ffcf6bf3-6f91-4afe-ba08-9e058c831480","Type":"ContainerDied","Data":"f6db1986b84e8dde262cfa7daa87ce332f6b26b12334c19866c3dc4e36d3cf00"} Jan 05 21:55:17 crc kubenswrapper[5000]: I0105 21:55:17.965068 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ffcf6bf3-6f91-4afe-ba08-9e058c831480","Type":"ContainerStarted","Data":"33958379e364b1bc2ee5700fa4a130008f5a1a949b51a642484fee7413dc77dd"} Jan 05 21:55:17 crc kubenswrapper[5000]: I0105 21:55:17.965738 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 05 21:55:17 crc kubenswrapper[5000]: I0105 21:55:17.968368 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d62d32f0-a7e0-4949-82d3-5e35d8fbf43b","Type":"ContainerStarted","Data":"084237b57a2fcfd992ff2c1972fb10a52ad86deaeba36c935c5d5f9fd9496d89"} Jan 05 21:55:17 crc kubenswrapper[5000]: I0105 21:55:17.968638 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:55:18 crc kubenswrapper[5000]: I0105 21:55:18.020783 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.020768394 podStartE2EDuration="37.020768394s" podCreationTimestamp="2026-01-05 21:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:55:17.994437473 +0000 UTC m=+1272.950639942" watchObservedRunningTime="2026-01-05 21:55:18.020768394 +0000 
UTC m=+1272.976970863" Jan 05 21:55:18 crc kubenswrapper[5000]: I0105 21:55:18.024031 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.024022996 podStartE2EDuration="36.024022996s" podCreationTimestamp="2026-01-05 21:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 21:55:18.017598103 +0000 UTC m=+1272.973800592" watchObservedRunningTime="2026-01-05 21:55:18.024022996 +0000 UTC m=+1272.980225465" Jan 05 21:55:23 crc kubenswrapper[5000]: I0105 21:55:23.013513 5000 generic.go:334] "Generic (PLEG): container finished" podID="61ec2645-0703-42ad-96da-136ceb8b9cda" containerID="8b2300954c86092fa75bf0ece364cdb26f24c04dd84fb934e2f14c8fef5d7967" exitCode=0 Jan 05 21:55:23 crc kubenswrapper[5000]: I0105 21:55:23.013608 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" event={"ID":"61ec2645-0703-42ad-96da-136ceb8b9cda","Type":"ContainerDied","Data":"8b2300954c86092fa75bf0ece364cdb26f24c04dd84fb934e2f14c8fef5d7967"} Jan 05 21:55:25 crc kubenswrapper[5000]: I0105 21:55:25.134097 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:25 crc kubenswrapper[5000]: I0105 21:55:25.236780 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-repo-setup-combined-ca-bundle\") pod \"61ec2645-0703-42ad-96da-136ceb8b9cda\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " Jan 05 21:55:25 crc kubenswrapper[5000]: I0105 21:55:25.236878 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-inventory\") pod \"61ec2645-0703-42ad-96da-136ceb8b9cda\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " Jan 05 21:55:25 crc kubenswrapper[5000]: I0105 21:55:25.237057 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jbzk\" (UniqueName: \"kubernetes.io/projected/61ec2645-0703-42ad-96da-136ceb8b9cda-kube-api-access-2jbzk\") pod \"61ec2645-0703-42ad-96da-136ceb8b9cda\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " Jan 05 21:55:25 crc kubenswrapper[5000]: I0105 21:55:25.237200 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-ssh-key\") pod \"61ec2645-0703-42ad-96da-136ceb8b9cda\" (UID: \"61ec2645-0703-42ad-96da-136ceb8b9cda\") " Jan 05 21:55:25 crc kubenswrapper[5000]: I0105 21:55:25.242320 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ec2645-0703-42ad-96da-136ceb8b9cda-kube-api-access-2jbzk" (OuterVolumeSpecName: "kube-api-access-2jbzk") pod "61ec2645-0703-42ad-96da-136ceb8b9cda" (UID: "61ec2645-0703-42ad-96da-136ceb8b9cda"). InnerVolumeSpecName "kube-api-access-2jbzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:55:25 crc kubenswrapper[5000]: I0105 21:55:25.252199 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "61ec2645-0703-42ad-96da-136ceb8b9cda" (UID: "61ec2645-0703-42ad-96da-136ceb8b9cda"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:55:25 crc kubenswrapper[5000]: I0105 21:55:25.265606 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "61ec2645-0703-42ad-96da-136ceb8b9cda" (UID: "61ec2645-0703-42ad-96da-136ceb8b9cda"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:55:25 crc kubenswrapper[5000]: I0105 21:55:25.269088 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-inventory" (OuterVolumeSpecName: "inventory") pod "61ec2645-0703-42ad-96da-136ceb8b9cda" (UID: "61ec2645-0703-42ad-96da-136ceb8b9cda"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:55:25 crc kubenswrapper[5000]: I0105 21:55:25.339646 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jbzk\" (UniqueName: \"kubernetes.io/projected/61ec2645-0703-42ad-96da-136ceb8b9cda-kube-api-access-2jbzk\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:25 crc kubenswrapper[5000]: I0105 21:55:25.340019 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:25 crc kubenswrapper[5000]: I0105 21:55:25.340034 5000 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:25 crc kubenswrapper[5000]: I0105 21:55:25.340045 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61ec2645-0703-42ad-96da-136ceb8b9cda-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.041800 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" event={"ID":"61ec2645-0703-42ad-96da-136ceb8b9cda","Type":"ContainerDied","Data":"f5d57a39f91f002199d777a7c7c68cd404517fd30059abc79082c0d7d21f908c"} Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.041840 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5d57a39f91f002199d777a7c7c68cd404517fd30059abc79082c0d7d21f908c" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.041956 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.249174 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276"] Jan 05 21:55:26 crc kubenswrapper[5000]: E0105 21:55:26.249547 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ec2645-0703-42ad-96da-136ceb8b9cda" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.249560 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ec2645-0703-42ad-96da-136ceb8b9cda" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.249757 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ec2645-0703-42ad-96da-136ceb8b9cda" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.250433 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.253317 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.253386 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.253492 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.253316 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.259475 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276"] Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.358083 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d8b6f53-b39a-4cd8-9587-92cd0f427528-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8l276\" (UID: \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.358278 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d8b6f53-b39a-4cd8-9587-92cd0f427528-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8l276\" (UID: \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.358442 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksfbl\" (UniqueName: \"kubernetes.io/projected/7d8b6f53-b39a-4cd8-9587-92cd0f427528-kube-api-access-ksfbl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8l276\" (UID: \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.459682 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d8b6f53-b39a-4cd8-9587-92cd0f427528-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8l276\" (UID: \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.459784 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksfbl\" (UniqueName: \"kubernetes.io/projected/7d8b6f53-b39a-4cd8-9587-92cd0f427528-kube-api-access-ksfbl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8l276\" (UID: \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.460728 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d8b6f53-b39a-4cd8-9587-92cd0f427528-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8l276\" (UID: \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.473685 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d8b6f53-b39a-4cd8-9587-92cd0f427528-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8l276\" (UID: \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.475871 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d8b6f53-b39a-4cd8-9587-92cd0f427528-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8l276\" (UID: \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.482809 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksfbl\" (UniqueName: \"kubernetes.io/projected/7d8b6f53-b39a-4cd8-9587-92cd0f427528-kube-api-access-ksfbl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8l276\" (UID: \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" Jan 05 21:55:26 crc kubenswrapper[5000]: I0105 21:55:26.564780 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" Jan 05 21:55:27 crc kubenswrapper[5000]: I0105 21:55:27.065066 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276"] Jan 05 21:55:28 crc kubenswrapper[5000]: I0105 21:55:28.061760 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" event={"ID":"7d8b6f53-b39a-4cd8-9587-92cd0f427528","Type":"ContainerStarted","Data":"3acb3898ad097c0fde959e9b23da83adeeb5f4efa5f23044b0e550b528679e53"} Jan 05 21:55:28 crc kubenswrapper[5000]: I0105 21:55:28.062136 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" event={"ID":"7d8b6f53-b39a-4cd8-9587-92cd0f427528","Type":"ContainerStarted","Data":"83dc2728062e93208ce36a037cf98f36e08a70af018caa7133a4fa7a6305e521"} Jan 05 21:55:28 crc kubenswrapper[5000]: I0105 21:55:28.077690 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" podStartSLOduration=1.636971301 podStartE2EDuration="2.077670829s" podCreationTimestamp="2026-01-05 21:55:26 +0000 UTC" firstStartedPulling="2026-01-05 21:55:27.065503598 +0000 UTC m=+1282.021706067" lastFinishedPulling="2026-01-05 21:55:27.506203126 +0000 UTC m=+1282.462405595" observedRunningTime="2026-01-05 21:55:28.074226001 +0000 UTC m=+1283.030428490" watchObservedRunningTime="2026-01-05 21:55:28.077670829 +0000 UTC m=+1283.033873298" Jan 05 21:55:31 crc kubenswrapper[5000]: I0105 21:55:31.086294 5000 generic.go:334] "Generic (PLEG): container finished" podID="7d8b6f53-b39a-4cd8-9587-92cd0f427528" containerID="3acb3898ad097c0fde959e9b23da83adeeb5f4efa5f23044b0e550b528679e53" exitCode=0 Jan 05 21:55:31 crc kubenswrapper[5000]: I0105 21:55:31.086367 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" event={"ID":"7d8b6f53-b39a-4cd8-9587-92cd0f427528","Type":"ContainerDied","Data":"3acb3898ad097c0fde959e9b23da83adeeb5f4efa5f23044b0e550b528679e53"} Jan 05 21:55:32 crc kubenswrapper[5000]: I0105 21:55:32.334174 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 05 21:55:32 crc kubenswrapper[5000]: I0105 21:55:32.645608 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" Jan 05 21:55:32 crc kubenswrapper[5000]: I0105 21:55:32.779772 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d8b6f53-b39a-4cd8-9587-92cd0f427528-ssh-key\") pod \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\" (UID: \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\") " Jan 05 21:55:32 crc kubenswrapper[5000]: I0105 21:55:32.780155 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d8b6f53-b39a-4cd8-9587-92cd0f427528-inventory\") pod \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\" (UID: \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\") " Jan 05 21:55:32 crc kubenswrapper[5000]: I0105 21:55:32.780196 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksfbl\" (UniqueName: \"kubernetes.io/projected/7d8b6f53-b39a-4cd8-9587-92cd0f427528-kube-api-access-ksfbl\") pod \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\" (UID: \"7d8b6f53-b39a-4cd8-9587-92cd0f427528\") " Jan 05 21:55:32 crc kubenswrapper[5000]: I0105 21:55:32.785688 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d8b6f53-b39a-4cd8-9587-92cd0f427528-kube-api-access-ksfbl" (OuterVolumeSpecName: "kube-api-access-ksfbl") pod "7d8b6f53-b39a-4cd8-9587-92cd0f427528" (UID: 
"7d8b6f53-b39a-4cd8-9587-92cd0f427528"). InnerVolumeSpecName "kube-api-access-ksfbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:55:32 crc kubenswrapper[5000]: I0105 21:55:32.810516 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d8b6f53-b39a-4cd8-9587-92cd0f427528-inventory" (OuterVolumeSpecName: "inventory") pod "7d8b6f53-b39a-4cd8-9587-92cd0f427528" (UID: "7d8b6f53-b39a-4cd8-9587-92cd0f427528"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:55:32 crc kubenswrapper[5000]: I0105 21:55:32.810699 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d8b6f53-b39a-4cd8-9587-92cd0f427528-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7d8b6f53-b39a-4cd8-9587-92cd0f427528" (UID: "7d8b6f53-b39a-4cd8-9587-92cd0f427528"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:55:32 crc kubenswrapper[5000]: I0105 21:55:32.882652 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d8b6f53-b39a-4cd8-9587-92cd0f427528-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:32 crc kubenswrapper[5000]: I0105 21:55:32.882692 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksfbl\" (UniqueName: \"kubernetes.io/projected/7d8b6f53-b39a-4cd8-9587-92cd0f427528-kube-api-access-ksfbl\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:32 crc kubenswrapper[5000]: I0105 21:55:32.882704 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d8b6f53-b39a-4cd8-9587-92cd0f427528-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 21:55:32 crc kubenswrapper[5000]: I0105 21:55:32.972104 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 
21:55:33.105543 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" event={"ID":"7d8b6f53-b39a-4cd8-9587-92cd0f427528","Type":"ContainerDied","Data":"83dc2728062e93208ce36a037cf98f36e08a70af018caa7133a4fa7a6305e521"} Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.105722 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83dc2728062e93208ce36a037cf98f36e08a70af018caa7133a4fa7a6305e521" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.105843 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8l276" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.221123 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm"] Jan 05 21:55:33 crc kubenswrapper[5000]: E0105 21:55:33.222080 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d8b6f53-b39a-4cd8-9587-92cd0f427528" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.222098 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d8b6f53-b39a-4cd8-9587-92cd0f427528" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.222533 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d8b6f53-b39a-4cd8-9587-92cd0f427528" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.223790 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.226348 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.226398 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.226883 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.227371 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.244564 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm"] Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.329081 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m8dg\" (UniqueName: \"kubernetes.io/projected/a03fd86d-bb7e-48cb-b37e-f94231148420-kube-api-access-9m8dg\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.329192 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 
21:55:33.329321 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.329341 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.430670 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m8dg\" (UniqueName: \"kubernetes.io/projected/a03fd86d-bb7e-48cb-b37e-f94231148420-kube-api-access-9m8dg\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.430761 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.430864 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.430899 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.437384 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.438444 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.449971 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m8dg\" (UniqueName: \"kubernetes.io/projected/a03fd86d-bb7e-48cb-b37e-f94231148420-kube-api-access-9m8dg\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.454378 5000 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:33 crc kubenswrapper[5000]: I0105 21:55:33.553254 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:55:34 crc kubenswrapper[5000]: I0105 21:55:34.132541 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm"] Jan 05 21:55:35 crc kubenswrapper[5000]: I0105 21:55:35.123266 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" event={"ID":"a03fd86d-bb7e-48cb-b37e-f94231148420","Type":"ContainerStarted","Data":"a56cf417ccd02c4c8445542a776dc7519aff801f222273c973e5664ddf900aa7"} Jan 05 21:55:35 crc kubenswrapper[5000]: I0105 21:55:35.123605 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" event={"ID":"a03fd86d-bb7e-48cb-b37e-f94231148420","Type":"ContainerStarted","Data":"837d1efff7fe696a5210486d9f0801c6076705116ad63d4b9232bacd6700e6bf"} Jan 05 21:55:35 crc kubenswrapper[5000]: I0105 21:55:35.146573 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" podStartSLOduration=1.541854593 podStartE2EDuration="2.146551873s" podCreationTimestamp="2026-01-05 21:55:33 +0000 UTC" firstStartedPulling="2026-01-05 21:55:34.134267539 +0000 UTC m=+1289.090470008" lastFinishedPulling="2026-01-05 21:55:34.738964819 +0000 UTC m=+1289.695167288" observedRunningTime="2026-01-05 21:55:35.141761717 +0000 UTC m=+1290.097964206" watchObservedRunningTime="2026-01-05 
21:55:35.146551873 +0000 UTC m=+1290.102754342" Jan 05 21:55:53 crc kubenswrapper[5000]: I0105 21:55:53.098956 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:55:53 crc kubenswrapper[5000]: I0105 21:55:53.099483 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:56:17 crc kubenswrapper[5000]: I0105 21:56:17.532822 5000 scope.go:117] "RemoveContainer" containerID="260ef748ee1a072cda11b02968497e0db6064b1440c5c7d18503161ecb68b20a" Jan 05 21:56:17 crc kubenswrapper[5000]: I0105 21:56:17.558857 5000 scope.go:117] "RemoveContainer" containerID="c71735cb13e7fc7d9c43c4f17398961bf9a429bbd87abee69882634e05dd7601" Jan 05 21:56:23 crc kubenswrapper[5000]: I0105 21:56:23.098521 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:56:23 crc kubenswrapper[5000]: I0105 21:56:23.099107 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:56:53 crc kubenswrapper[5000]: I0105 21:56:53.098646 5000 patch_prober.go:28] interesting 
pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:56:53 crc kubenswrapper[5000]: I0105 21:56:53.099180 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:56:53 crc kubenswrapper[5000]: I0105 21:56:53.099229 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:56:53 crc kubenswrapper[5000]: I0105 21:56:53.099702 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58dab0a989c1b4f585eb373d3bac27fc6e5066847040a7bbf02db8196a310e67"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:56:53 crc kubenswrapper[5000]: I0105 21:56:53.099747 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://58dab0a989c1b4f585eb373d3bac27fc6e5066847040a7bbf02db8196a310e67" gracePeriod=600 Jan 05 21:56:53 crc kubenswrapper[5000]: I0105 21:56:53.921239 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="58dab0a989c1b4f585eb373d3bac27fc6e5066847040a7bbf02db8196a310e67" exitCode=0 Jan 05 21:56:53 crc kubenswrapper[5000]: I0105 21:56:53.921325 
5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"58dab0a989c1b4f585eb373d3bac27fc6e5066847040a7bbf02db8196a310e67"} Jan 05 21:56:53 crc kubenswrapper[5000]: I0105 21:56:53.921859 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269"} Jan 05 21:56:53 crc kubenswrapper[5000]: I0105 21:56:53.921883 5000 scope.go:117] "RemoveContainer" containerID="2afb4d5d8baa55f032a268f19c9c0e64f3bcb79bfc34f77baf7addae2164ef7a" Jan 05 21:57:17 crc kubenswrapper[5000]: I0105 21:57:17.643723 5000 scope.go:117] "RemoveContainer" containerID="13b527218d5c31ca5dcfe8d50ac62803b8c8139beabc6c2b5cdace5c3f14ddf4" Jan 05 21:57:17 crc kubenswrapper[5000]: I0105 21:57:17.675036 5000 scope.go:117] "RemoveContainer" containerID="13d7266a89d384890b7542fe2dfe9a69631446ba60c49be4c8488734f7c2bf46" Jan 05 21:57:17 crc kubenswrapper[5000]: I0105 21:57:17.738477 5000 scope.go:117] "RemoveContainer" containerID="6fbf63f7a69f12c3bbd706d2b6603fc029c2561731b5d0f6af53914a2beb5679" Jan 05 21:57:17 crc kubenswrapper[5000]: I0105 21:57:17.764743 5000 scope.go:117] "RemoveContainer" containerID="e0b18519ef40111898fa1ffb641755a894426e7ecc98e8f0fd4a230ad39f2bc5" Jan 05 21:57:17 crc kubenswrapper[5000]: I0105 21:57:17.832632 5000 scope.go:117] "RemoveContainer" containerID="df5591c32be8cfbebf5f04a8281f2d2b3995308349b30d071a5e88c1b77ca279" Jan 05 21:57:17 crc kubenswrapper[5000]: I0105 21:57:17.879223 5000 scope.go:117] "RemoveContainer" containerID="fe63e94b35090b3122a7827eb7ef966253d86322302c8379c303531b66412252" Jan 05 21:57:17 crc kubenswrapper[5000]: I0105 21:57:17.920864 5000 scope.go:117] "RemoveContainer" 
containerID="06c8e285cb3c449ce058e0732d6d7b991ed9292f3ee0e350a56fda27f5346c8a" Jan 05 21:58:40 crc kubenswrapper[5000]: I0105 21:58:40.898108 5000 generic.go:334] "Generic (PLEG): container finished" podID="a03fd86d-bb7e-48cb-b37e-f94231148420" containerID="a56cf417ccd02c4c8445542a776dc7519aff801f222273c973e5664ddf900aa7" exitCode=0 Jan 05 21:58:40 crc kubenswrapper[5000]: I0105 21:58:40.898184 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" event={"ID":"a03fd86d-bb7e-48cb-b37e-f94231148420","Type":"ContainerDied","Data":"a56cf417ccd02c4c8445542a776dc7519aff801f222273c973e5664ddf900aa7"} Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.365377 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.464848 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m8dg\" (UniqueName: \"kubernetes.io/projected/a03fd86d-bb7e-48cb-b37e-f94231148420-kube-api-access-9m8dg\") pod \"a03fd86d-bb7e-48cb-b37e-f94231148420\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.465075 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-inventory\") pod \"a03fd86d-bb7e-48cb-b37e-f94231148420\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.465201 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-ssh-key\") pod \"a03fd86d-bb7e-48cb-b37e-f94231148420\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.465257 5000 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-bootstrap-combined-ca-bundle\") pod \"a03fd86d-bb7e-48cb-b37e-f94231148420\" (UID: \"a03fd86d-bb7e-48cb-b37e-f94231148420\") " Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.472234 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a03fd86d-bb7e-48cb-b37e-f94231148420" (UID: "a03fd86d-bb7e-48cb-b37e-f94231148420"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.472287 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a03fd86d-bb7e-48cb-b37e-f94231148420-kube-api-access-9m8dg" (OuterVolumeSpecName: "kube-api-access-9m8dg") pod "a03fd86d-bb7e-48cb-b37e-f94231148420" (UID: "a03fd86d-bb7e-48cb-b37e-f94231148420"). InnerVolumeSpecName "kube-api-access-9m8dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.509092 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a03fd86d-bb7e-48cb-b37e-f94231148420" (UID: "a03fd86d-bb7e-48cb-b37e-f94231148420"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.509191 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-inventory" (OuterVolumeSpecName: "inventory") pod "a03fd86d-bb7e-48cb-b37e-f94231148420" (UID: "a03fd86d-bb7e-48cb-b37e-f94231148420"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.567191 5000 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.567229 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m8dg\" (UniqueName: \"kubernetes.io/projected/a03fd86d-bb7e-48cb-b37e-f94231148420-kube-api-access-9m8dg\") on node \"crc\" DevicePath \"\"" Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.567244 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.567257 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a03fd86d-bb7e-48cb-b37e-f94231148420-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.926725 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" event={"ID":"a03fd86d-bb7e-48cb-b37e-f94231148420","Type":"ContainerDied","Data":"837d1efff7fe696a5210486d9f0801c6076705116ad63d4b9232bacd6700e6bf"} Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.926782 5000 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="837d1efff7fe696a5210486d9f0801c6076705116ad63d4b9232bacd6700e6bf" Jan 05 21:58:42 crc kubenswrapper[5000]: I0105 21:58:42.926869 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.028164 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst"] Jan 05 21:58:43 crc kubenswrapper[5000]: E0105 21:58:43.028567 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a03fd86d-bb7e-48cb-b37e-f94231148420" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.028588 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a03fd86d-bb7e-48cb-b37e-f94231148420" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.028765 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a03fd86d-bb7e-48cb-b37e-f94231148420" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.029494 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.031781 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.033327 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.033578 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.033857 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.046101 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst"] Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.076670 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn4jp\" (UniqueName: \"kubernetes.io/projected/65606fc1-6df2-4b19-8964-b69f04feb59b-kube-api-access-rn4jp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8zgst\" (UID: \"65606fc1-6df2-4b19-8964-b69f04feb59b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.076842 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65606fc1-6df2-4b19-8964-b69f04feb59b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8zgst\" (UID: \"65606fc1-6df2-4b19-8964-b69f04feb59b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 
21:58:43.076931 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65606fc1-6df2-4b19-8964-b69f04feb59b-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8zgst\" (UID: \"65606fc1-6df2-4b19-8964-b69f04feb59b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.179268 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn4jp\" (UniqueName: \"kubernetes.io/projected/65606fc1-6df2-4b19-8964-b69f04feb59b-kube-api-access-rn4jp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8zgst\" (UID: \"65606fc1-6df2-4b19-8964-b69f04feb59b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.179745 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65606fc1-6df2-4b19-8964-b69f04feb59b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8zgst\" (UID: \"65606fc1-6df2-4b19-8964-b69f04feb59b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.179803 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65606fc1-6df2-4b19-8964-b69f04feb59b-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8zgst\" (UID: \"65606fc1-6df2-4b19-8964-b69f04feb59b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.184968 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65606fc1-6df2-4b19-8964-b69f04feb59b-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-8zgst\" (UID: \"65606fc1-6df2-4b19-8964-b69f04feb59b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.190695 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65606fc1-6df2-4b19-8964-b69f04feb59b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8zgst\" (UID: \"65606fc1-6df2-4b19-8964-b69f04feb59b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.199684 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn4jp\" (UniqueName: \"kubernetes.io/projected/65606fc1-6df2-4b19-8964-b69f04feb59b-kube-api-access-rn4jp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8zgst\" (UID: \"65606fc1-6df2-4b19-8964-b69f04feb59b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.362291 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.865214 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst"] Jan 05 21:58:43 crc kubenswrapper[5000]: W0105 21:58:43.872623 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65606fc1_6df2_4b19_8964_b69f04feb59b.slice/crio-110911afc9b89a2855dc4eb4660d1134cb39a04434b2fc3edb9879a9c5ccafbf WatchSource:0}: Error finding container 110911afc9b89a2855dc4eb4660d1134cb39a04434b2fc3edb9879a9c5ccafbf: Status 404 returned error can't find the container with id 110911afc9b89a2855dc4eb4660d1134cb39a04434b2fc3edb9879a9c5ccafbf Jan 05 21:58:43 crc kubenswrapper[5000]: I0105 21:58:43.935394 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" event={"ID":"65606fc1-6df2-4b19-8964-b69f04feb59b","Type":"ContainerStarted","Data":"110911afc9b89a2855dc4eb4660d1134cb39a04434b2fc3edb9879a9c5ccafbf"} Jan 05 21:58:44 crc kubenswrapper[5000]: I0105 21:58:44.944706 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" event={"ID":"65606fc1-6df2-4b19-8964-b69f04feb59b","Type":"ContainerStarted","Data":"589f320b3b4ada6cd187c55226223ca6f3553decd229756d0f884ffc11b0dbcd"} Jan 05 21:58:44 crc kubenswrapper[5000]: I0105 21:58:44.977149 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" podStartSLOduration=1.542222133 podStartE2EDuration="1.977122035s" podCreationTimestamp="2026-01-05 21:58:43 +0000 UTC" firstStartedPulling="2026-01-05 21:58:43.875123714 +0000 UTC m=+1478.831326183" lastFinishedPulling="2026-01-05 21:58:44.310023616 +0000 UTC 
m=+1479.266226085" observedRunningTime="2026-01-05 21:58:44.959576175 +0000 UTC m=+1479.915778664" watchObservedRunningTime="2026-01-05 21:58:44.977122035 +0000 UTC m=+1479.933324504" Jan 05 21:58:53 crc kubenswrapper[5000]: I0105 21:58:53.099710 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:58:53 crc kubenswrapper[5000]: I0105 21:58:53.100458 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:59:18 crc kubenswrapper[5000]: I0105 21:59:18.079520 5000 scope.go:117] "RemoveContainer" containerID="c00f510b237b67d04eacb2d7f0415530e083a5e8dc90bbb0a7a54669ac9e9835" Jan 05 21:59:18 crc kubenswrapper[5000]: I0105 21:59:18.124654 5000 scope.go:117] "RemoveContainer" containerID="7a6a4968715d9d44c7b8c778b6e54d185b03b1a688a862e746c6bb4413986aae" Jan 05 21:59:18 crc kubenswrapper[5000]: I0105 21:59:18.143231 5000 scope.go:117] "RemoveContainer" containerID="737409486df281fbf426801f1908806e96a2a2215e6479e4a88554f578cf3d85" Jan 05 21:59:18 crc kubenswrapper[5000]: I0105 21:59:18.160405 5000 scope.go:117] "RemoveContainer" containerID="af62ebcffaf50ea090758b452cc8f8eb625daf69dbeaddfd5644b8213c377b6f" Jan 05 21:59:18 crc kubenswrapper[5000]: I0105 21:59:18.180532 5000 scope.go:117] "RemoveContainer" containerID="197a1560a5e017a7f742d09093279dc14501c21cb7f45b07827286bb39bd06af" Jan 05 21:59:18 crc kubenswrapper[5000]: I0105 21:59:18.200977 5000 scope.go:117] "RemoveContainer" 
containerID="c2cd52adf8ca381cb68c9fbf1e51a9e9be2de43a151b13fd0b1aa7d2dab350e2" Jan 05 21:59:23 crc kubenswrapper[5000]: I0105 21:59:23.098918 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:59:23 crc kubenswrapper[5000]: I0105 21:59:23.099576 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:59:53 crc kubenswrapper[5000]: I0105 21:59:53.099096 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 21:59:53 crc kubenswrapper[5000]: I0105 21:59:53.099967 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 21:59:53 crc kubenswrapper[5000]: I0105 21:59:53.100049 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 21:59:53 crc kubenswrapper[5000]: I0105 21:59:53.101339 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 21:59:53 crc kubenswrapper[5000]: I0105 21:59:53.101471 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" gracePeriod=600 Jan 05 21:59:53 crc kubenswrapper[5000]: E0105 21:59:53.227367 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 21:59:53 crc kubenswrapper[5000]: I0105 21:59:53.624122 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" exitCode=0 Jan 05 21:59:53 crc kubenswrapper[5000]: I0105 21:59:53.624176 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269"} Jan 05 21:59:53 crc kubenswrapper[5000]: I0105 21:59:53.624214 5000 scope.go:117] "RemoveContainer" containerID="58dab0a989c1b4f585eb373d3bac27fc6e5066847040a7bbf02db8196a310e67" Jan 05 21:59:53 crc kubenswrapper[5000]: I0105 21:59:53.626081 5000 
scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 21:59:53 crc kubenswrapper[5000]: E0105 21:59:53.626505 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.144845 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2"] Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.146789 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.149293 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.151041 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.154061 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2"] Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.229962 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pftqg\" (UniqueName: \"kubernetes.io/projected/0dbb8eb8-3156-426c-bc78-8ca50985132d-kube-api-access-pftqg\") pod \"collect-profiles-29460840-x6xx2\" (UID: \"0dbb8eb8-3156-426c-bc78-8ca50985132d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.230044 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0dbb8eb8-3156-426c-bc78-8ca50985132d-secret-volume\") pod \"collect-profiles-29460840-x6xx2\" (UID: \"0dbb8eb8-3156-426c-bc78-8ca50985132d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.230081 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dbb8eb8-3156-426c-bc78-8ca50985132d-config-volume\") pod \"collect-profiles-29460840-x6xx2\" (UID: \"0dbb8eb8-3156-426c-bc78-8ca50985132d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.332195 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pftqg\" (UniqueName: \"kubernetes.io/projected/0dbb8eb8-3156-426c-bc78-8ca50985132d-kube-api-access-pftqg\") pod \"collect-profiles-29460840-x6xx2\" (UID: \"0dbb8eb8-3156-426c-bc78-8ca50985132d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.332282 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0dbb8eb8-3156-426c-bc78-8ca50985132d-secret-volume\") pod \"collect-profiles-29460840-x6xx2\" (UID: \"0dbb8eb8-3156-426c-bc78-8ca50985132d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.332316 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0dbb8eb8-3156-426c-bc78-8ca50985132d-config-volume\") pod \"collect-profiles-29460840-x6xx2\" (UID: \"0dbb8eb8-3156-426c-bc78-8ca50985132d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.333371 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dbb8eb8-3156-426c-bc78-8ca50985132d-config-volume\") pod \"collect-profiles-29460840-x6xx2\" (UID: \"0dbb8eb8-3156-426c-bc78-8ca50985132d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.347655 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0dbb8eb8-3156-426c-bc78-8ca50985132d-secret-volume\") pod \"collect-profiles-29460840-x6xx2\" (UID: \"0dbb8eb8-3156-426c-bc78-8ca50985132d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.351747 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pftqg\" (UniqueName: \"kubernetes.io/projected/0dbb8eb8-3156-426c-bc78-8ca50985132d-kube-api-access-pftqg\") pod \"collect-profiles-29460840-x6xx2\" (UID: \"0dbb8eb8-3156-426c-bc78-8ca50985132d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.476321 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" Jan 05 22:00:00 crc kubenswrapper[5000]: I0105 22:00:00.925570 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2"] Jan 05 22:00:01 crc kubenswrapper[5000]: I0105 22:00:01.040819 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-h8f2j"] Jan 05 22:00:01 crc kubenswrapper[5000]: I0105 22:00:01.049429 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-h8f2j"] Jan 05 22:00:01 crc kubenswrapper[5000]: I0105 22:00:01.068659 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-5fa5-account-create-update-frhwv"] Jan 05 22:00:01 crc kubenswrapper[5000]: I0105 22:00:01.081968 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-5fa5-account-create-update-frhwv"] Jan 05 22:00:01 crc kubenswrapper[5000]: I0105 22:00:01.333074 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0767d8af-09be-4773-abb0-0c31c01a4eda" path="/var/lib/kubelet/pods/0767d8af-09be-4773-abb0-0c31c01a4eda/volumes" Jan 05 22:00:01 crc kubenswrapper[5000]: I0105 22:00:01.333866 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f59c30fb-8d31-4a59-8ba3-ec838c4cd239" path="/var/lib/kubelet/pods/f59c30fb-8d31-4a59-8ba3-ec838c4cd239/volumes" Jan 05 22:00:01 crc kubenswrapper[5000]: I0105 22:00:01.698945 5000 generic.go:334] "Generic (PLEG): container finished" podID="0dbb8eb8-3156-426c-bc78-8ca50985132d" containerID="d542a48daf53b6d25211369de6b392d0358391490d43e84717f1dd15856e95ba" exitCode=0 Jan 05 22:00:01 crc kubenswrapper[5000]: I0105 22:00:01.698988 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" 
event={"ID":"0dbb8eb8-3156-426c-bc78-8ca50985132d","Type":"ContainerDied","Data":"d542a48daf53b6d25211369de6b392d0358391490d43e84717f1dd15856e95ba"} Jan 05 22:00:01 crc kubenswrapper[5000]: I0105 22:00:01.699012 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" event={"ID":"0dbb8eb8-3156-426c-bc78-8ca50985132d","Type":"ContainerStarted","Data":"6a89aab2e0843bbbe2a08378a3e59134f963e109c0a9dc212607346a4faf3cd9"} Jan 05 22:00:03 crc kubenswrapper[5000]: I0105 22:00:03.011405 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" Jan 05 22:00:03 crc kubenswrapper[5000]: I0105 22:00:03.099251 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pftqg\" (UniqueName: \"kubernetes.io/projected/0dbb8eb8-3156-426c-bc78-8ca50985132d-kube-api-access-pftqg\") pod \"0dbb8eb8-3156-426c-bc78-8ca50985132d\" (UID: \"0dbb8eb8-3156-426c-bc78-8ca50985132d\") " Jan 05 22:00:03 crc kubenswrapper[5000]: I0105 22:00:03.099308 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dbb8eb8-3156-426c-bc78-8ca50985132d-config-volume\") pod \"0dbb8eb8-3156-426c-bc78-8ca50985132d\" (UID: \"0dbb8eb8-3156-426c-bc78-8ca50985132d\") " Jan 05 22:00:03 crc kubenswrapper[5000]: I0105 22:00:03.099330 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0dbb8eb8-3156-426c-bc78-8ca50985132d-secret-volume\") pod \"0dbb8eb8-3156-426c-bc78-8ca50985132d\" (UID: \"0dbb8eb8-3156-426c-bc78-8ca50985132d\") " Jan 05 22:00:03 crc kubenswrapper[5000]: I0105 22:00:03.100140 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dbb8eb8-3156-426c-bc78-8ca50985132d-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "0dbb8eb8-3156-426c-bc78-8ca50985132d" (UID: "0dbb8eb8-3156-426c-bc78-8ca50985132d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:00:03 crc kubenswrapper[5000]: I0105 22:00:03.104985 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dbb8eb8-3156-426c-bc78-8ca50985132d-kube-api-access-pftqg" (OuterVolumeSpecName: "kube-api-access-pftqg") pod "0dbb8eb8-3156-426c-bc78-8ca50985132d" (UID: "0dbb8eb8-3156-426c-bc78-8ca50985132d"). InnerVolumeSpecName "kube-api-access-pftqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:00:03 crc kubenswrapper[5000]: I0105 22:00:03.106003 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbb8eb8-3156-426c-bc78-8ca50985132d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0dbb8eb8-3156-426c-bc78-8ca50985132d" (UID: "0dbb8eb8-3156-426c-bc78-8ca50985132d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:00:03 crc kubenswrapper[5000]: I0105 22:00:03.201248 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pftqg\" (UniqueName: \"kubernetes.io/projected/0dbb8eb8-3156-426c-bc78-8ca50985132d-kube-api-access-pftqg\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:03 crc kubenswrapper[5000]: I0105 22:00:03.201288 5000 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dbb8eb8-3156-426c-bc78-8ca50985132d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:03 crc kubenswrapper[5000]: I0105 22:00:03.201297 5000 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0dbb8eb8-3156-426c-bc78-8ca50985132d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:03 crc kubenswrapper[5000]: I0105 22:00:03.718135 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" event={"ID":"0dbb8eb8-3156-426c-bc78-8ca50985132d","Type":"ContainerDied","Data":"6a89aab2e0843bbbe2a08378a3e59134f963e109c0a9dc212607346a4faf3cd9"} Jan 05 22:00:03 crc kubenswrapper[5000]: I0105 22:00:03.718418 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a89aab2e0843bbbe2a08378a3e59134f963e109c0a9dc212607346a4faf3cd9" Jan 05 22:00:03 crc kubenswrapper[5000]: I0105 22:00:03.718167 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460840-x6xx2" Jan 05 22:00:08 crc kubenswrapper[5000]: I0105 22:00:08.031757 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6f58-account-create-update-p696k"] Jan 05 22:00:08 crc kubenswrapper[5000]: I0105 22:00:08.039817 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6f58-account-create-update-p696k"] Jan 05 22:00:08 crc kubenswrapper[5000]: I0105 22:00:08.050749 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-4wcrs"] Jan 05 22:00:08 crc kubenswrapper[5000]: I0105 22:00:08.061462 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-p9fkk"] Jan 05 22:00:08 crc kubenswrapper[5000]: I0105 22:00:08.069122 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3ea5-account-create-update-fl5lh"] Jan 05 22:00:08 crc kubenswrapper[5000]: I0105 22:00:08.076341 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3ea5-account-create-update-fl5lh"] Jan 05 22:00:08 crc kubenswrapper[5000]: I0105 22:00:08.085688 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-p9fkk"] Jan 05 22:00:08 crc kubenswrapper[5000]: I0105 22:00:08.096488 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-4wcrs"] Jan 05 22:00:08 crc kubenswrapper[5000]: I0105 22:00:08.323800 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:00:08 crc kubenswrapper[5000]: E0105 22:00:08.324343 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:00:09 crc kubenswrapper[5000]: I0105 22:00:09.334060 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a76d52-5034-45e8-a156-448f54eaafaa" path="/var/lib/kubelet/pods/13a76d52-5034-45e8-a156-448f54eaafaa/volumes" Jan 05 22:00:09 crc kubenswrapper[5000]: I0105 22:00:09.335040 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55530057-0b94-461e-a436-74813cb5ca59" path="/var/lib/kubelet/pods/55530057-0b94-461e-a436-74813cb5ca59/volumes" Jan 05 22:00:09 crc kubenswrapper[5000]: I0105 22:00:09.335535 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60c60f7d-7fa7-46a1-94fd-a9d5547a14f6" path="/var/lib/kubelet/pods/60c60f7d-7fa7-46a1-94fd-a9d5547a14f6/volumes" Jan 05 22:00:09 crc kubenswrapper[5000]: I0105 22:00:09.336051 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4110eb0-802a-41f4-a920-dbc15a48cf98" path="/var/lib/kubelet/pods/a4110eb0-802a-41f4-a920-dbc15a48cf98/volumes" Jan 05 22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.622830 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rv84h"] Jan 05 22:00:15 crc kubenswrapper[5000]: E0105 22:00:15.623766 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbb8eb8-3156-426c-bc78-8ca50985132d" containerName="collect-profiles" Jan 05 22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.623779 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbb8eb8-3156-426c-bc78-8ca50985132d" containerName="collect-profiles" Jan 05 22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.623977 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dbb8eb8-3156-426c-bc78-8ca50985132d" containerName="collect-profiles" Jan 05 
22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.625246 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.644383 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rv84h"] Jan 05 22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.776530 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc93aa0-dfba-496e-a1f9-215eb951cd28-utilities\") pod \"redhat-operators-rv84h\" (UID: \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\") " pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.776621 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6kzj\" (UniqueName: \"kubernetes.io/projected/7dc93aa0-dfba-496e-a1f9-215eb951cd28-kube-api-access-w6kzj\") pod \"redhat-operators-rv84h\" (UID: \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\") " pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.776705 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc93aa0-dfba-496e-a1f9-215eb951cd28-catalog-content\") pod \"redhat-operators-rv84h\" (UID: \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\") " pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.878923 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc93aa0-dfba-496e-a1f9-215eb951cd28-utilities\") pod \"redhat-operators-rv84h\" (UID: \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\") " pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:15 crc 
kubenswrapper[5000]: I0105 22:00:15.879016 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6kzj\" (UniqueName: \"kubernetes.io/projected/7dc93aa0-dfba-496e-a1f9-215eb951cd28-kube-api-access-w6kzj\") pod \"redhat-operators-rv84h\" (UID: \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\") " pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.879118 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc93aa0-dfba-496e-a1f9-215eb951cd28-catalog-content\") pod \"redhat-operators-rv84h\" (UID: \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\") " pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.879661 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc93aa0-dfba-496e-a1f9-215eb951cd28-utilities\") pod \"redhat-operators-rv84h\" (UID: \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\") " pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.879682 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc93aa0-dfba-496e-a1f9-215eb951cd28-catalog-content\") pod \"redhat-operators-rv84h\" (UID: \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\") " pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.902498 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6kzj\" (UniqueName: \"kubernetes.io/projected/7dc93aa0-dfba-496e-a1f9-215eb951cd28-kube-api-access-w6kzj\") pod \"redhat-operators-rv84h\" (UID: \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\") " pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:15 crc kubenswrapper[5000]: I0105 22:00:15.951716 5000 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:16 crc kubenswrapper[5000]: I0105 22:00:16.447232 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rv84h"] Jan 05 22:00:16 crc kubenswrapper[5000]: W0105 22:00:16.451802 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dc93aa0_dfba_496e_a1f9_215eb951cd28.slice/crio-1a7686ce35e9af80ec3d18d044f323ff0dd304d08d97fbd505bbf366551d022e WatchSource:0}: Error finding container 1a7686ce35e9af80ec3d18d044f323ff0dd304d08d97fbd505bbf366551d022e: Status 404 returned error can't find the container with id 1a7686ce35e9af80ec3d18d044f323ff0dd304d08d97fbd505bbf366551d022e Jan 05 22:00:16 crc kubenswrapper[5000]: I0105 22:00:16.832818 5000 generic.go:334] "Generic (PLEG): container finished" podID="7dc93aa0-dfba-496e-a1f9-215eb951cd28" containerID="3b36266dd24e0342db393365a896aa8c2a74c217f2926c04c55f79bf7ae2aaa1" exitCode=0 Jan 05 22:00:16 crc kubenswrapper[5000]: I0105 22:00:16.832874 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rv84h" event={"ID":"7dc93aa0-dfba-496e-a1f9-215eb951cd28","Type":"ContainerDied","Data":"3b36266dd24e0342db393365a896aa8c2a74c217f2926c04c55f79bf7ae2aaa1"} Jan 05 22:00:16 crc kubenswrapper[5000]: I0105 22:00:16.833184 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rv84h" event={"ID":"7dc93aa0-dfba-496e-a1f9-215eb951cd28","Type":"ContainerStarted","Data":"1a7686ce35e9af80ec3d18d044f323ff0dd304d08d97fbd505bbf366551d022e"} Jan 05 22:00:16 crc kubenswrapper[5000]: I0105 22:00:16.834173 5000 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:00:18 crc kubenswrapper[5000]: I0105 22:00:18.275601 5000 scope.go:117] "RemoveContainer" 
containerID="86ac1be5b8fd07eb6a5b7511b1576fe354c871a85f0bf836d50b20fdc478950f" Jan 05 22:00:18 crc kubenswrapper[5000]: I0105 22:00:18.293855 5000 scope.go:117] "RemoveContainer" containerID="d5290dda34132c8bf757ea14eb389a8b3c8b4f01164a990ac6f30fb80a27df05" Jan 05 22:00:18 crc kubenswrapper[5000]: I0105 22:00:18.343142 5000 scope.go:117] "RemoveContainer" containerID="14b42868e27b501ad1061e46d5cd836c1b8cbb70acf1ce55f08a3fe82ed35eb9" Jan 05 22:00:18 crc kubenswrapper[5000]: I0105 22:00:18.384426 5000 scope.go:117] "RemoveContainer" containerID="79bb225cf83550573870dfdbb6439c21937df62ce093c2f514d9322f09d22208" Jan 05 22:00:18 crc kubenswrapper[5000]: I0105 22:00:18.444685 5000 scope.go:117] "RemoveContainer" containerID="9c9fb7c88b8c8099a7622257d7fd053bb7481afbe0448e437d504f365a5cbba6" Jan 05 22:00:18 crc kubenswrapper[5000]: I0105 22:00:18.479411 5000 scope.go:117] "RemoveContainer" containerID="9d64d2bdaf1244bc20d0a89b314aa6af0eb1d3f944d46266a1989ebe74de2b0f" Jan 05 22:00:18 crc kubenswrapper[5000]: I0105 22:00:18.856017 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rv84h" event={"ID":"7dc93aa0-dfba-496e-a1f9-215eb951cd28","Type":"ContainerStarted","Data":"a0f2090afdf026e775f9e4e0bc77ad6dc3ac4332604b7f10537009c9173765b6"} Jan 05 22:00:20 crc kubenswrapper[5000]: I0105 22:00:20.879499 5000 generic.go:334] "Generic (PLEG): container finished" podID="65606fc1-6df2-4b19-8964-b69f04feb59b" containerID="589f320b3b4ada6cd187c55226223ca6f3553decd229756d0f884ffc11b0dbcd" exitCode=0 Jan 05 22:00:20 crc kubenswrapper[5000]: I0105 22:00:20.879584 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" event={"ID":"65606fc1-6df2-4b19-8964-b69f04feb59b","Type":"ContainerDied","Data":"589f320b3b4ada6cd187c55226223ca6f3553decd229756d0f884ffc11b0dbcd"} Jan 05 22:00:20 crc kubenswrapper[5000]: I0105 22:00:20.882447 5000 generic.go:334] "Generic (PLEG): 
container finished" podID="7dc93aa0-dfba-496e-a1f9-215eb951cd28" containerID="a0f2090afdf026e775f9e4e0bc77ad6dc3ac4332604b7f10537009c9173765b6" exitCode=0 Jan 05 22:00:20 crc kubenswrapper[5000]: I0105 22:00:20.882487 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rv84h" event={"ID":"7dc93aa0-dfba-496e-a1f9-215eb951cd28","Type":"ContainerDied","Data":"a0f2090afdf026e775f9e4e0bc77ad6dc3ac4332604b7f10537009c9173765b6"} Jan 05 22:00:21 crc kubenswrapper[5000]: I0105 22:00:21.324278 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:00:21 crc kubenswrapper[5000]: E0105 22:00:21.324988 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:00:21 crc kubenswrapper[5000]: I0105 22:00:21.892106 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rv84h" event={"ID":"7dc93aa0-dfba-496e-a1f9-215eb951cd28","Type":"ContainerStarted","Data":"31be0bee0001f83652d54ffe7607dd0040b58637d879bd8b0ba338dbace1153b"} Jan 05 22:00:21 crc kubenswrapper[5000]: I0105 22:00:21.910095 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rv84h" podStartSLOduration=2.413745527 podStartE2EDuration="6.910077518s" podCreationTimestamp="2026-01-05 22:00:15 +0000 UTC" firstStartedPulling="2026-01-05 22:00:16.833958666 +0000 UTC m=+1571.790161135" lastFinishedPulling="2026-01-05 22:00:21.330290647 +0000 UTC m=+1576.286493126" observedRunningTime="2026-01-05 22:00:21.909392268 +0000 UTC 
m=+1576.865594737" watchObservedRunningTime="2026-01-05 22:00:21.910077518 +0000 UTC m=+1576.866279987" Jan 05 22:00:22 crc kubenswrapper[5000]: I0105 22:00:22.335606 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" Jan 05 22:00:22 crc kubenswrapper[5000]: I0105 22:00:22.504118 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65606fc1-6df2-4b19-8964-b69f04feb59b-ssh-key\") pod \"65606fc1-6df2-4b19-8964-b69f04feb59b\" (UID: \"65606fc1-6df2-4b19-8964-b69f04feb59b\") " Jan 05 22:00:22 crc kubenswrapper[5000]: I0105 22:00:22.504212 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn4jp\" (UniqueName: \"kubernetes.io/projected/65606fc1-6df2-4b19-8964-b69f04feb59b-kube-api-access-rn4jp\") pod \"65606fc1-6df2-4b19-8964-b69f04feb59b\" (UID: \"65606fc1-6df2-4b19-8964-b69f04feb59b\") " Jan 05 22:00:22 crc kubenswrapper[5000]: I0105 22:00:22.504238 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65606fc1-6df2-4b19-8964-b69f04feb59b-inventory\") pod \"65606fc1-6df2-4b19-8964-b69f04feb59b\" (UID: \"65606fc1-6df2-4b19-8964-b69f04feb59b\") " Jan 05 22:00:22 crc kubenswrapper[5000]: I0105 22:00:22.511074 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65606fc1-6df2-4b19-8964-b69f04feb59b-kube-api-access-rn4jp" (OuterVolumeSpecName: "kube-api-access-rn4jp") pod "65606fc1-6df2-4b19-8964-b69f04feb59b" (UID: "65606fc1-6df2-4b19-8964-b69f04feb59b"). InnerVolumeSpecName "kube-api-access-rn4jp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:00:22 crc kubenswrapper[5000]: I0105 22:00:22.537112 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65606fc1-6df2-4b19-8964-b69f04feb59b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "65606fc1-6df2-4b19-8964-b69f04feb59b" (UID: "65606fc1-6df2-4b19-8964-b69f04feb59b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:00:22 crc kubenswrapper[5000]: I0105 22:00:22.549844 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65606fc1-6df2-4b19-8964-b69f04feb59b-inventory" (OuterVolumeSpecName: "inventory") pod "65606fc1-6df2-4b19-8964-b69f04feb59b" (UID: "65606fc1-6df2-4b19-8964-b69f04feb59b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:00:22 crc kubenswrapper[5000]: I0105 22:00:22.606512 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65606fc1-6df2-4b19-8964-b69f04feb59b-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:22 crc kubenswrapper[5000]: I0105 22:00:22.606576 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn4jp\" (UniqueName: \"kubernetes.io/projected/65606fc1-6df2-4b19-8964-b69f04feb59b-kube-api-access-rn4jp\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:22 crc kubenswrapper[5000]: I0105 22:00:22.606586 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65606fc1-6df2-4b19-8964-b69f04feb59b-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:22 crc kubenswrapper[5000]: I0105 22:00:22.906214 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" 
event={"ID":"65606fc1-6df2-4b19-8964-b69f04feb59b","Type":"ContainerDied","Data":"110911afc9b89a2855dc4eb4660d1134cb39a04434b2fc3edb9879a9c5ccafbf"} Jan 05 22:00:22 crc kubenswrapper[5000]: I0105 22:00:22.906274 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="110911afc9b89a2855dc4eb4660d1134cb39a04434b2fc3edb9879a9c5ccafbf" Jan 05 22:00:22 crc kubenswrapper[5000]: I0105 22:00:22.906294 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8zgst" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.003021 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j5mxv"] Jan 05 22:00:23 crc kubenswrapper[5000]: E0105 22:00:23.003467 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65606fc1-6df2-4b19-8964-b69f04feb59b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.003484 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="65606fc1-6df2-4b19-8964-b69f04feb59b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.003683 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="65606fc1-6df2-4b19-8964-b69f04feb59b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.005024 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.019150 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5mxv"] Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.029090 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0235ecd6-a255-404b-972d-2a43413f858f-utilities\") pod \"redhat-marketplace-j5mxv\" (UID: \"0235ecd6-a255-404b-972d-2a43413f858f\") " pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.029209 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0235ecd6-a255-404b-972d-2a43413f858f-catalog-content\") pod \"redhat-marketplace-j5mxv\" (UID: \"0235ecd6-a255-404b-972d-2a43413f858f\") " pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.029529 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh"] Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.029563 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8klx\" (UniqueName: \"kubernetes.io/projected/0235ecd6-a255-404b-972d-2a43413f858f-kube-api-access-k8klx\") pod \"redhat-marketplace-j5mxv\" (UID: \"0235ecd6-a255-404b-972d-2a43413f858f\") " pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.030680 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.032950 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.033218 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.034195 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.034326 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.088418 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh"] Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.131491 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0235ecd6-a255-404b-972d-2a43413f858f-catalog-content\") pod \"redhat-marketplace-j5mxv\" (UID: \"0235ecd6-a255-404b-972d-2a43413f858f\") " pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.131591 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8klx\" (UniqueName: \"kubernetes.io/projected/0235ecd6-a255-404b-972d-2a43413f858f-kube-api-access-k8klx\") pod \"redhat-marketplace-j5mxv\" (UID: \"0235ecd6-a255-404b-972d-2a43413f858f\") " pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.131655 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0235ecd6-a255-404b-972d-2a43413f858f-utilities\") pod \"redhat-marketplace-j5mxv\" (UID: \"0235ecd6-a255-404b-972d-2a43413f858f\") " pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.132208 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0235ecd6-a255-404b-972d-2a43413f858f-utilities\") pod \"redhat-marketplace-j5mxv\" (UID: \"0235ecd6-a255-404b-972d-2a43413f858f\") " pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.133239 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0235ecd6-a255-404b-972d-2a43413f858f-catalog-content\") pod \"redhat-marketplace-j5mxv\" (UID: \"0235ecd6-a255-404b-972d-2a43413f858f\") " pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.147943 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8klx\" (UniqueName: \"kubernetes.io/projected/0235ecd6-a255-404b-972d-2a43413f858f-kube-api-access-k8klx\") pod \"redhat-marketplace-j5mxv\" (UID: \"0235ecd6-a255-404b-972d-2a43413f858f\") " pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:23 crc kubenswrapper[5000]: E0105 22:00:23.186562 5000 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65606fc1_6df2_4b19_8964_b69f04feb59b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65606fc1_6df2_4b19_8964_b69f04feb59b.slice/crio-110911afc9b89a2855dc4eb4660d1134cb39a04434b2fc3edb9879a9c5ccafbf\": RecentStats: unable to find data in memory cache]" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 
22:00:23.233196 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85045115-6f3e-4624-9e9b-0db7e0a6419e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh\" (UID: \"85045115-6f3e-4624-9e9b-0db7e0a6419e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.233558 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85045115-6f3e-4624-9e9b-0db7e0a6419e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh\" (UID: \"85045115-6f3e-4624-9e9b-0db7e0a6419e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.234373 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xkpx\" (UniqueName: \"kubernetes.io/projected/85045115-6f3e-4624-9e9b-0db7e0a6419e-kube-api-access-7xkpx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh\" (UID: \"85045115-6f3e-4624-9e9b-0db7e0a6419e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.334826 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.336281 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85045115-6f3e-4624-9e9b-0db7e0a6419e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh\" (UID: \"85045115-6f3e-4624-9e9b-0db7e0a6419e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.336375 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85045115-6f3e-4624-9e9b-0db7e0a6419e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh\" (UID: \"85045115-6f3e-4624-9e9b-0db7e0a6419e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.336521 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xkpx\" (UniqueName: \"kubernetes.io/projected/85045115-6f3e-4624-9e9b-0db7e0a6419e-kube-api-access-7xkpx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh\" (UID: \"85045115-6f3e-4624-9e9b-0db7e0a6419e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.340223 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85045115-6f3e-4624-9e9b-0db7e0a6419e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh\" (UID: \"85045115-6f3e-4624-9e9b-0db7e0a6419e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.344212 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/85045115-6f3e-4624-9e9b-0db7e0a6419e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh\" (UID: \"85045115-6f3e-4624-9e9b-0db7e0a6419e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.356857 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xkpx\" (UniqueName: \"kubernetes.io/projected/85045115-6f3e-4624-9e9b-0db7e0a6419e-kube-api-access-7xkpx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh\" (UID: \"85045115-6f3e-4624-9e9b-0db7e0a6419e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.651013 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.820530 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5mxv"] Jan 05 22:00:23 crc kubenswrapper[5000]: W0105 22:00:23.828487 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0235ecd6_a255_404b_972d_2a43413f858f.slice/crio-f95e96247b76f44cd6f24e5fcdb3b11b6e40e74dd10f8101ed6414504e102edb WatchSource:0}: Error finding container f95e96247b76f44cd6f24e5fcdb3b11b6e40e74dd10f8101ed6414504e102edb: Status 404 returned error can't find the container with id f95e96247b76f44cd6f24e5fcdb3b11b6e40e74dd10f8101ed6414504e102edb Jan 05 22:00:23 crc kubenswrapper[5000]: I0105 22:00:23.922123 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5mxv" event={"ID":"0235ecd6-a255-404b-972d-2a43413f858f","Type":"ContainerStarted","Data":"f95e96247b76f44cd6f24e5fcdb3b11b6e40e74dd10f8101ed6414504e102edb"} Jan 05 22:00:24 
crc kubenswrapper[5000]: I0105 22:00:24.182245 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh"] Jan 05 22:00:24 crc kubenswrapper[5000]: W0105 22:00:24.209027 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85045115_6f3e_4624_9e9b_0db7e0a6419e.slice/crio-b1da8622025cd1c23c3728bec72f78ba8a21f26f7828142326e07d146fb1ad80 WatchSource:0}: Error finding container b1da8622025cd1c23c3728bec72f78ba8a21f26f7828142326e07d146fb1ad80: Status 404 returned error can't find the container with id b1da8622025cd1c23c3728bec72f78ba8a21f26f7828142326e07d146fb1ad80 Jan 05 22:00:24 crc kubenswrapper[5000]: I0105 22:00:24.951235 5000 generic.go:334] "Generic (PLEG): container finished" podID="0235ecd6-a255-404b-972d-2a43413f858f" containerID="f28d874174384c2b6e16317ee7af92cea209c1e1e35f8a8432c6fcd0ee18aefe" exitCode=0 Jan 05 22:00:24 crc kubenswrapper[5000]: I0105 22:00:24.951542 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5mxv" event={"ID":"0235ecd6-a255-404b-972d-2a43413f858f","Type":"ContainerDied","Data":"f28d874174384c2b6e16317ee7af92cea209c1e1e35f8a8432c6fcd0ee18aefe"} Jan 05 22:00:24 crc kubenswrapper[5000]: I0105 22:00:24.954023 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" event={"ID":"85045115-6f3e-4624-9e9b-0db7e0a6419e","Type":"ContainerStarted","Data":"b1da8622025cd1c23c3728bec72f78ba8a21f26f7828142326e07d146fb1ad80"} Jan 05 22:00:25 crc kubenswrapper[5000]: I0105 22:00:25.951818 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:25 crc kubenswrapper[5000]: I0105 22:00:25.952177 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:25 crc kubenswrapper[5000]: I0105 22:00:25.967122 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" event={"ID":"85045115-6f3e-4624-9e9b-0db7e0a6419e","Type":"ContainerStarted","Data":"c9d9f33b59bb1953c755b4a428392deae0bbfbdaf9b34a60298ab779d549dc5c"} Jan 05 22:00:25 crc kubenswrapper[5000]: I0105 22:00:25.982196 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" podStartSLOduration=2.286690292 podStartE2EDuration="2.98217533s" podCreationTimestamp="2026-01-05 22:00:23 +0000 UTC" firstStartedPulling="2026-01-05 22:00:24.212537065 +0000 UTC m=+1579.168739534" lastFinishedPulling="2026-01-05 22:00:24.908022103 +0000 UTC m=+1579.864224572" observedRunningTime="2026-01-05 22:00:25.979159094 +0000 UTC m=+1580.935361573" watchObservedRunningTime="2026-01-05 22:00:25.98217533 +0000 UTC m=+1580.938377799" Jan 05 22:00:26 crc kubenswrapper[5000]: I0105 22:00:26.980765 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5mxv" event={"ID":"0235ecd6-a255-404b-972d-2a43413f858f","Type":"ContainerStarted","Data":"23ee892521e31a6e70619732ddfc9de4db1b8f215a26bb9dcc395558f94217dc"} Jan 05 22:00:27 crc kubenswrapper[5000]: I0105 22:00:27.013141 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rv84h" podUID="7dc93aa0-dfba-496e-a1f9-215eb951cd28" containerName="registry-server" probeResult="failure" output=< Jan 05 22:00:27 crc kubenswrapper[5000]: timeout: failed to connect service ":50051" within 1s Jan 05 22:00:27 crc kubenswrapper[5000]: > Jan 05 22:00:27 crc kubenswrapper[5000]: I0105 22:00:27.990225 5000 generic.go:334] "Generic (PLEG): container finished" podID="0235ecd6-a255-404b-972d-2a43413f858f" 
containerID="23ee892521e31a6e70619732ddfc9de4db1b8f215a26bb9dcc395558f94217dc" exitCode=0 Jan 05 22:00:27 crc kubenswrapper[5000]: I0105 22:00:27.990542 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5mxv" event={"ID":"0235ecd6-a255-404b-972d-2a43413f858f","Type":"ContainerDied","Data":"23ee892521e31a6e70619732ddfc9de4db1b8f215a26bb9dcc395558f94217dc"} Jan 05 22:00:28 crc kubenswrapper[5000]: I0105 22:00:28.999107 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5mxv" event={"ID":"0235ecd6-a255-404b-972d-2a43413f858f","Type":"ContainerStarted","Data":"97580b7ac017f14cf5471a6e39db53c6ad3b4e6f6b0ed209687010a2b2b8c033"} Jan 05 22:00:29 crc kubenswrapper[5000]: I0105 22:00:29.018850 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j5mxv" podStartSLOduration=4.020626486 podStartE2EDuration="7.018806537s" podCreationTimestamp="2026-01-05 22:00:22 +0000 UTC" firstStartedPulling="2026-01-05 22:00:24.962986129 +0000 UTC m=+1579.919188598" lastFinishedPulling="2026-01-05 22:00:27.96116618 +0000 UTC m=+1582.917368649" observedRunningTime="2026-01-05 22:00:29.015335748 +0000 UTC m=+1583.971538217" watchObservedRunningTime="2026-01-05 22:00:29.018806537 +0000 UTC m=+1583.975009006" Jan 05 22:00:31 crc kubenswrapper[5000]: I0105 22:00:31.047165 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-zg424"] Jan 05 22:00:31 crc kubenswrapper[5000]: I0105 22:00:31.054732 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-zg424"] Jan 05 22:00:31 crc kubenswrapper[5000]: I0105 22:00:31.334679 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f" path="/var/lib/kubelet/pods/3f0e4b47-d6c4-4fed-b8ac-cc7a63155e1f/volumes" Jan 05 22:00:32 crc kubenswrapper[5000]: I0105 22:00:32.324485 5000 
scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:00:32 crc kubenswrapper[5000]: E0105 22:00:32.325045 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:00:33 crc kubenswrapper[5000]: I0105 22:00:33.335177 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:33 crc kubenswrapper[5000]: I0105 22:00:33.335226 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:33 crc kubenswrapper[5000]: I0105 22:00:33.410391 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:34 crc kubenswrapper[5000]: I0105 22:00:34.044147 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dba2-account-create-update-pg6tz"] Jan 05 22:00:34 crc kubenswrapper[5000]: I0105 22:00:34.059951 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-dba2-account-create-update-pg6tz"] Jan 05 22:00:34 crc kubenswrapper[5000]: I0105 22:00:34.070936 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-l4nvl"] Jan 05 22:00:34 crc kubenswrapper[5000]: I0105 22:00:34.082298 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-fa44-account-create-update-l84sv"] Jan 05 22:00:34 crc kubenswrapper[5000]: I0105 22:00:34.089527 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:34 crc kubenswrapper[5000]: I0105 22:00:34.090160 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fa44-account-create-update-l84sv"] Jan 05 22:00:34 crc kubenswrapper[5000]: I0105 22:00:34.098027 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-l4nvl"] Jan 05 22:00:34 crc kubenswrapper[5000]: I0105 22:00:34.133910 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5mxv"] Jan 05 22:00:35 crc kubenswrapper[5000]: I0105 22:00:35.031378 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-kq56w"] Jan 05 22:00:35 crc kubenswrapper[5000]: I0105 22:00:35.042045 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-s2pv5"] Jan 05 22:00:35 crc kubenswrapper[5000]: I0105 22:00:35.048903 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-kq56w"] Jan 05 22:00:35 crc kubenswrapper[5000]: I0105 22:00:35.060215 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-s2pv5"] Jan 05 22:00:35 crc kubenswrapper[5000]: I0105 22:00:35.334354 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd9b04a-feba-4af2-a02f-be6af11c059c" path="/var/lib/kubelet/pods/1fd9b04a-feba-4af2-a02f-be6af11c059c/volumes" Jan 05 22:00:35 crc kubenswrapper[5000]: I0105 22:00:35.335231 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37652792-2853-4edf-a1e4-c0f51291b3c4" path="/var/lib/kubelet/pods/37652792-2853-4edf-a1e4-c0f51291b3c4/volumes" Jan 05 22:00:35 crc kubenswrapper[5000]: I0105 22:00:35.335807 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ff8f19-5fd1-41f3-b417-1d32146bad28" path="/var/lib/kubelet/pods/56ff8f19-5fd1-41f3-b417-1d32146bad28/volumes" Jan 05 22:00:35 crc kubenswrapper[5000]: I0105 
22:00:35.336350 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac82245a-da6c-4a0a-98a2-404935fbfb64" path="/var/lib/kubelet/pods/ac82245a-da6c-4a0a-98a2-404935fbfb64/volumes" Jan 05 22:00:35 crc kubenswrapper[5000]: I0105 22:00:35.337372 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03a78cf-7207-491b-bdf2-dc30e3f70480" path="/var/lib/kubelet/pods/b03a78cf-7207-491b-bdf2-dc30e3f70480/volumes" Jan 05 22:00:35 crc kubenswrapper[5000]: I0105 22:00:35.999361 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:36 crc kubenswrapper[5000]: I0105 22:00:36.053458 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:36 crc kubenswrapper[5000]: I0105 22:00:36.061875 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j5mxv" podUID="0235ecd6-a255-404b-972d-2a43413f858f" containerName="registry-server" containerID="cri-o://97580b7ac017f14cf5471a6e39db53c6ad3b4e6f6b0ed209687010a2b2b8c033" gracePeriod=2 Jan 05 22:00:36 crc kubenswrapper[5000]: I0105 22:00:36.548964 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:36 crc kubenswrapper[5000]: I0105 22:00:36.592829 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8klx\" (UniqueName: \"kubernetes.io/projected/0235ecd6-a255-404b-972d-2a43413f858f-kube-api-access-k8klx\") pod \"0235ecd6-a255-404b-972d-2a43413f858f\" (UID: \"0235ecd6-a255-404b-972d-2a43413f858f\") " Jan 05 22:00:36 crc kubenswrapper[5000]: I0105 22:00:36.592931 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0235ecd6-a255-404b-972d-2a43413f858f-utilities\") pod \"0235ecd6-a255-404b-972d-2a43413f858f\" (UID: \"0235ecd6-a255-404b-972d-2a43413f858f\") " Jan 05 22:00:36 crc kubenswrapper[5000]: I0105 22:00:36.592984 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0235ecd6-a255-404b-972d-2a43413f858f-catalog-content\") pod \"0235ecd6-a255-404b-972d-2a43413f858f\" (UID: \"0235ecd6-a255-404b-972d-2a43413f858f\") " Jan 05 22:00:36 crc kubenswrapper[5000]: I0105 22:00:36.593919 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0235ecd6-a255-404b-972d-2a43413f858f-utilities" (OuterVolumeSpecName: "utilities") pod "0235ecd6-a255-404b-972d-2a43413f858f" (UID: "0235ecd6-a255-404b-972d-2a43413f858f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:00:36 crc kubenswrapper[5000]: I0105 22:00:36.599823 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0235ecd6-a255-404b-972d-2a43413f858f-kube-api-access-k8klx" (OuterVolumeSpecName: "kube-api-access-k8klx") pod "0235ecd6-a255-404b-972d-2a43413f858f" (UID: "0235ecd6-a255-404b-972d-2a43413f858f"). InnerVolumeSpecName "kube-api-access-k8klx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:00:36 crc kubenswrapper[5000]: I0105 22:00:36.618519 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0235ecd6-a255-404b-972d-2a43413f858f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0235ecd6-a255-404b-972d-2a43413f858f" (UID: "0235ecd6-a255-404b-972d-2a43413f858f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:00:36 crc kubenswrapper[5000]: I0105 22:00:36.695977 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8klx\" (UniqueName: \"kubernetes.io/projected/0235ecd6-a255-404b-972d-2a43413f858f-kube-api-access-k8klx\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:36 crc kubenswrapper[5000]: I0105 22:00:36.696015 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0235ecd6-a255-404b-972d-2a43413f858f-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:36 crc kubenswrapper[5000]: I0105 22:00:36.696026 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0235ecd6-a255-404b-972d-2a43413f858f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.074834 5000 generic.go:334] "Generic (PLEG): container finished" podID="0235ecd6-a255-404b-972d-2a43413f858f" containerID="97580b7ac017f14cf5471a6e39db53c6ad3b4e6f6b0ed209687010a2b2b8c033" exitCode=0 Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.074917 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5mxv" event={"ID":"0235ecd6-a255-404b-972d-2a43413f858f","Type":"ContainerDied","Data":"97580b7ac017f14cf5471a6e39db53c6ad3b4e6f6b0ed209687010a2b2b8c033"} Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.074954 5000 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-j5mxv" event={"ID":"0235ecd6-a255-404b-972d-2a43413f858f","Type":"ContainerDied","Data":"f95e96247b76f44cd6f24e5fcdb3b11b6e40e74dd10f8101ed6414504e102edb"} Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.074975 5000 scope.go:117] "RemoveContainer" containerID="97580b7ac017f14cf5471a6e39db53c6ad3b4e6f6b0ed209687010a2b2b8c033" Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.074999 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5mxv" Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.109483 5000 scope.go:117] "RemoveContainer" containerID="23ee892521e31a6e70619732ddfc9de4db1b8f215a26bb9dcc395558f94217dc" Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.134282 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5mxv"] Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.146730 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5mxv"] Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.148869 5000 scope.go:117] "RemoveContainer" containerID="f28d874174384c2b6e16317ee7af92cea209c1e1e35f8a8432c6fcd0ee18aefe" Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.194127 5000 scope.go:117] "RemoveContainer" containerID="97580b7ac017f14cf5471a6e39db53c6ad3b4e6f6b0ed209687010a2b2b8c033" Jan 05 22:00:37 crc kubenswrapper[5000]: E0105 22:00:37.194659 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97580b7ac017f14cf5471a6e39db53c6ad3b4e6f6b0ed209687010a2b2b8c033\": container with ID starting with 97580b7ac017f14cf5471a6e39db53c6ad3b4e6f6b0ed209687010a2b2b8c033 not found: ID does not exist" containerID="97580b7ac017f14cf5471a6e39db53c6ad3b4e6f6b0ed209687010a2b2b8c033" Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.194696 5000 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97580b7ac017f14cf5471a6e39db53c6ad3b4e6f6b0ed209687010a2b2b8c033"} err="failed to get container status \"97580b7ac017f14cf5471a6e39db53c6ad3b4e6f6b0ed209687010a2b2b8c033\": rpc error: code = NotFound desc = could not find container \"97580b7ac017f14cf5471a6e39db53c6ad3b4e6f6b0ed209687010a2b2b8c033\": container with ID starting with 97580b7ac017f14cf5471a6e39db53c6ad3b4e6f6b0ed209687010a2b2b8c033 not found: ID does not exist" Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.194723 5000 scope.go:117] "RemoveContainer" containerID="23ee892521e31a6e70619732ddfc9de4db1b8f215a26bb9dcc395558f94217dc" Jan 05 22:00:37 crc kubenswrapper[5000]: E0105 22:00:37.195311 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23ee892521e31a6e70619732ddfc9de4db1b8f215a26bb9dcc395558f94217dc\": container with ID starting with 23ee892521e31a6e70619732ddfc9de4db1b8f215a26bb9dcc395558f94217dc not found: ID does not exist" containerID="23ee892521e31a6e70619732ddfc9de4db1b8f215a26bb9dcc395558f94217dc" Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.195336 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23ee892521e31a6e70619732ddfc9de4db1b8f215a26bb9dcc395558f94217dc"} err="failed to get container status \"23ee892521e31a6e70619732ddfc9de4db1b8f215a26bb9dcc395558f94217dc\": rpc error: code = NotFound desc = could not find container \"23ee892521e31a6e70619732ddfc9de4db1b8f215a26bb9dcc395558f94217dc\": container with ID starting with 23ee892521e31a6e70619732ddfc9de4db1b8f215a26bb9dcc395558f94217dc not found: ID does not exist" Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.195352 5000 scope.go:117] "RemoveContainer" containerID="f28d874174384c2b6e16317ee7af92cea209c1e1e35f8a8432c6fcd0ee18aefe" Jan 05 22:00:37 crc kubenswrapper[5000]: E0105 
22:00:37.195657 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f28d874174384c2b6e16317ee7af92cea209c1e1e35f8a8432c6fcd0ee18aefe\": container with ID starting with f28d874174384c2b6e16317ee7af92cea209c1e1e35f8a8432c6fcd0ee18aefe not found: ID does not exist" containerID="f28d874174384c2b6e16317ee7af92cea209c1e1e35f8a8432c6fcd0ee18aefe" Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.195682 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28d874174384c2b6e16317ee7af92cea209c1e1e35f8a8432c6fcd0ee18aefe"} err="failed to get container status \"f28d874174384c2b6e16317ee7af92cea209c1e1e35f8a8432c6fcd0ee18aefe\": rpc error: code = NotFound desc = could not find container \"f28d874174384c2b6e16317ee7af92cea209c1e1e35f8a8432c6fcd0ee18aefe\": container with ID starting with f28d874174384c2b6e16317ee7af92cea209c1e1e35f8a8432c6fcd0ee18aefe not found: ID does not exist" Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.334972 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0235ecd6-a255-404b-972d-2a43413f858f" path="/var/lib/kubelet/pods/0235ecd6-a255-404b-972d-2a43413f858f/volumes" Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.446166 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rv84h"] Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.446599 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rv84h" podUID="7dc93aa0-dfba-496e-a1f9-215eb951cd28" containerName="registry-server" containerID="cri-o://31be0bee0001f83652d54ffe7607dd0040b58637d879bd8b0ba338dbace1153b" gracePeriod=2 Jan 05 22:00:37 crc kubenswrapper[5000]: I0105 22:00:37.978106 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.026266 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6kzj\" (UniqueName: \"kubernetes.io/projected/7dc93aa0-dfba-496e-a1f9-215eb951cd28-kube-api-access-w6kzj\") pod \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\" (UID: \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\") " Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.026473 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc93aa0-dfba-496e-a1f9-215eb951cd28-catalog-content\") pod \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\" (UID: \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\") " Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.026514 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc93aa0-dfba-496e-a1f9-215eb951cd28-utilities\") pod \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\" (UID: \"7dc93aa0-dfba-496e-a1f9-215eb951cd28\") " Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.027284 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dc93aa0-dfba-496e-a1f9-215eb951cd28-utilities" (OuterVolumeSpecName: "utilities") pod "7dc93aa0-dfba-496e-a1f9-215eb951cd28" (UID: "7dc93aa0-dfba-496e-a1f9-215eb951cd28"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.030063 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc93aa0-dfba-496e-a1f9-215eb951cd28-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.032817 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc93aa0-dfba-496e-a1f9-215eb951cd28-kube-api-access-w6kzj" (OuterVolumeSpecName: "kube-api-access-w6kzj") pod "7dc93aa0-dfba-496e-a1f9-215eb951cd28" (UID: "7dc93aa0-dfba-496e-a1f9-215eb951cd28"). InnerVolumeSpecName "kube-api-access-w6kzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.087724 5000 generic.go:334] "Generic (PLEG): container finished" podID="7dc93aa0-dfba-496e-a1f9-215eb951cd28" containerID="31be0bee0001f83652d54ffe7607dd0040b58637d879bd8b0ba338dbace1153b" exitCode=0 Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.087764 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rv84h" event={"ID":"7dc93aa0-dfba-496e-a1f9-215eb951cd28","Type":"ContainerDied","Data":"31be0bee0001f83652d54ffe7607dd0040b58637d879bd8b0ba338dbace1153b"} Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.087797 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rv84h" event={"ID":"7dc93aa0-dfba-496e-a1f9-215eb951cd28","Type":"ContainerDied","Data":"1a7686ce35e9af80ec3d18d044f323ff0dd304d08d97fbd505bbf366551d022e"} Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.087808 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rv84h" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.087816 5000 scope.go:117] "RemoveContainer" containerID="31be0bee0001f83652d54ffe7607dd0040b58637d879bd8b0ba338dbace1153b" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.106492 5000 scope.go:117] "RemoveContainer" containerID="a0f2090afdf026e775f9e4e0bc77ad6dc3ac4332604b7f10537009c9173765b6" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.127903 5000 scope.go:117] "RemoveContainer" containerID="3b36266dd24e0342db393365a896aa8c2a74c217f2926c04c55f79bf7ae2aaa1" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.131499 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6kzj\" (UniqueName: \"kubernetes.io/projected/7dc93aa0-dfba-496e-a1f9-215eb951cd28-kube-api-access-w6kzj\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.137465 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dc93aa0-dfba-496e-a1f9-215eb951cd28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dc93aa0-dfba-496e-a1f9-215eb951cd28" (UID: "7dc93aa0-dfba-496e-a1f9-215eb951cd28"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.148944 5000 scope.go:117] "RemoveContainer" containerID="31be0bee0001f83652d54ffe7607dd0040b58637d879bd8b0ba338dbace1153b" Jan 05 22:00:38 crc kubenswrapper[5000]: E0105 22:00:38.149377 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31be0bee0001f83652d54ffe7607dd0040b58637d879bd8b0ba338dbace1153b\": container with ID starting with 31be0bee0001f83652d54ffe7607dd0040b58637d879bd8b0ba338dbace1153b not found: ID does not exist" containerID="31be0bee0001f83652d54ffe7607dd0040b58637d879bd8b0ba338dbace1153b" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.149404 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31be0bee0001f83652d54ffe7607dd0040b58637d879bd8b0ba338dbace1153b"} err="failed to get container status \"31be0bee0001f83652d54ffe7607dd0040b58637d879bd8b0ba338dbace1153b\": rpc error: code = NotFound desc = could not find container \"31be0bee0001f83652d54ffe7607dd0040b58637d879bd8b0ba338dbace1153b\": container with ID starting with 31be0bee0001f83652d54ffe7607dd0040b58637d879bd8b0ba338dbace1153b not found: ID does not exist" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.149425 5000 scope.go:117] "RemoveContainer" containerID="a0f2090afdf026e775f9e4e0bc77ad6dc3ac4332604b7f10537009c9173765b6" Jan 05 22:00:38 crc kubenswrapper[5000]: E0105 22:00:38.149846 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0f2090afdf026e775f9e4e0bc77ad6dc3ac4332604b7f10537009c9173765b6\": container with ID starting with a0f2090afdf026e775f9e4e0bc77ad6dc3ac4332604b7f10537009c9173765b6 not found: ID does not exist" containerID="a0f2090afdf026e775f9e4e0bc77ad6dc3ac4332604b7f10537009c9173765b6" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.149886 
5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f2090afdf026e775f9e4e0bc77ad6dc3ac4332604b7f10537009c9173765b6"} err="failed to get container status \"a0f2090afdf026e775f9e4e0bc77ad6dc3ac4332604b7f10537009c9173765b6\": rpc error: code = NotFound desc = could not find container \"a0f2090afdf026e775f9e4e0bc77ad6dc3ac4332604b7f10537009c9173765b6\": container with ID starting with a0f2090afdf026e775f9e4e0bc77ad6dc3ac4332604b7f10537009c9173765b6 not found: ID does not exist" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.149939 5000 scope.go:117] "RemoveContainer" containerID="3b36266dd24e0342db393365a896aa8c2a74c217f2926c04c55f79bf7ae2aaa1" Jan 05 22:00:38 crc kubenswrapper[5000]: E0105 22:00:38.150356 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b36266dd24e0342db393365a896aa8c2a74c217f2926c04c55f79bf7ae2aaa1\": container with ID starting with 3b36266dd24e0342db393365a896aa8c2a74c217f2926c04c55f79bf7ae2aaa1 not found: ID does not exist" containerID="3b36266dd24e0342db393365a896aa8c2a74c217f2926c04c55f79bf7ae2aaa1" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.150382 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b36266dd24e0342db393365a896aa8c2a74c217f2926c04c55f79bf7ae2aaa1"} err="failed to get container status \"3b36266dd24e0342db393365a896aa8c2a74c217f2926c04c55f79bf7ae2aaa1\": rpc error: code = NotFound desc = could not find container \"3b36266dd24e0342db393365a896aa8c2a74c217f2926c04c55f79bf7ae2aaa1\": container with ID starting with 3b36266dd24e0342db393365a896aa8c2a74c217f2926c04c55f79bf7ae2aaa1 not found: ID does not exist" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.233270 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7dc93aa0-dfba-496e-a1f9-215eb951cd28-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.431806 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rv84h"] Jan 05 22:00:38 crc kubenswrapper[5000]: I0105 22:00:38.440171 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rv84h"] Jan 05 22:00:39 crc kubenswrapper[5000]: I0105 22:00:39.337122 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dc93aa0-dfba-496e-a1f9-215eb951cd28" path="/var/lib/kubelet/pods/7dc93aa0-dfba-496e-a1f9-215eb951cd28/volumes" Jan 05 22:00:40 crc kubenswrapper[5000]: I0105 22:00:40.024458 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5515-account-create-update-7wkj4"] Jan 05 22:00:40 crc kubenswrapper[5000]: I0105 22:00:40.032047 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5515-account-create-update-7wkj4"] Jan 05 22:00:41 crc kubenswrapper[5000]: I0105 22:00:41.334657 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fa6ddda-7b19-4d81-b114-b887e43ce7e2" path="/var/lib/kubelet/pods/6fa6ddda-7b19-4d81-b114-b887e43ce7e2/volumes" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.644772 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b58t5"] Jan 05 22:00:43 crc kubenswrapper[5000]: E0105 22:00:43.645526 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc93aa0-dfba-496e-a1f9-215eb951cd28" containerName="extract-content" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.645540 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc93aa0-dfba-496e-a1f9-215eb951cd28" containerName="extract-content" Jan 05 22:00:43 crc kubenswrapper[5000]: E0105 22:00:43.645568 5000 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="7dc93aa0-dfba-496e-a1f9-215eb951cd28" containerName="registry-server" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.645574 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc93aa0-dfba-496e-a1f9-215eb951cd28" containerName="registry-server" Jan 05 22:00:43 crc kubenswrapper[5000]: E0105 22:00:43.645593 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0235ecd6-a255-404b-972d-2a43413f858f" containerName="registry-server" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.645598 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0235ecd6-a255-404b-972d-2a43413f858f" containerName="registry-server" Jan 05 22:00:43 crc kubenswrapper[5000]: E0105 22:00:43.645611 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc93aa0-dfba-496e-a1f9-215eb951cd28" containerName="extract-utilities" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.645618 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc93aa0-dfba-496e-a1f9-215eb951cd28" containerName="extract-utilities" Jan 05 22:00:43 crc kubenswrapper[5000]: E0105 22:00:43.645625 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0235ecd6-a255-404b-972d-2a43413f858f" containerName="extract-utilities" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.645632 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0235ecd6-a255-404b-972d-2a43413f858f" containerName="extract-utilities" Jan 05 22:00:43 crc kubenswrapper[5000]: E0105 22:00:43.645650 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0235ecd6-a255-404b-972d-2a43413f858f" containerName="extract-content" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.645655 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0235ecd6-a255-404b-972d-2a43413f858f" containerName="extract-content" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.645832 5000 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="0235ecd6-a255-404b-972d-2a43413f858f" containerName="registry-server" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.645852 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc93aa0-dfba-496e-a1f9-215eb951cd28" containerName="registry-server" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.647550 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.655759 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b58t5"] Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.728358 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/239a756c-7a26-4ae3-9c69-2b327b785d18-catalog-content\") pod \"certified-operators-b58t5\" (UID: \"239a756c-7a26-4ae3-9c69-2b327b785d18\") " pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.729563 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb229\" (UniqueName: \"kubernetes.io/projected/239a756c-7a26-4ae3-9c69-2b327b785d18-kube-api-access-lb229\") pod \"certified-operators-b58t5\" (UID: \"239a756c-7a26-4ae3-9c69-2b327b785d18\") " pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.729853 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/239a756c-7a26-4ae3-9c69-2b327b785d18-utilities\") pod \"certified-operators-b58t5\" (UID: \"239a756c-7a26-4ae3-9c69-2b327b785d18\") " pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.831408 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/239a756c-7a26-4ae3-9c69-2b327b785d18-catalog-content\") pod \"certified-operators-b58t5\" (UID: \"239a756c-7a26-4ae3-9c69-2b327b785d18\") " pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.831472 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb229\" (UniqueName: \"kubernetes.io/projected/239a756c-7a26-4ae3-9c69-2b327b785d18-kube-api-access-lb229\") pod \"certified-operators-b58t5\" (UID: \"239a756c-7a26-4ae3-9c69-2b327b785d18\") " pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.831514 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/239a756c-7a26-4ae3-9c69-2b327b785d18-utilities\") pod \"certified-operators-b58t5\" (UID: \"239a756c-7a26-4ae3-9c69-2b327b785d18\") " pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.832021 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/239a756c-7a26-4ae3-9c69-2b327b785d18-catalog-content\") pod \"certified-operators-b58t5\" (UID: \"239a756c-7a26-4ae3-9c69-2b327b785d18\") " pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.832093 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/239a756c-7a26-4ae3-9c69-2b327b785d18-utilities\") pod \"certified-operators-b58t5\" (UID: \"239a756c-7a26-4ae3-9c69-2b327b785d18\") " pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.849129 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lb229\" (UniqueName: \"kubernetes.io/projected/239a756c-7a26-4ae3-9c69-2b327b785d18-kube-api-access-lb229\") pod \"certified-operators-b58t5\" (UID: \"239a756c-7a26-4ae3-9c69-2b327b785d18\") " pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:43 crc kubenswrapper[5000]: I0105 22:00:43.977669 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:44 crc kubenswrapper[5000]: I0105 22:00:44.550134 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b58t5"] Jan 05 22:00:45 crc kubenswrapper[5000]: I0105 22:00:45.024181 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qrvsf"] Jan 05 22:00:45 crc kubenswrapper[5000]: I0105 22:00:45.031592 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qrvsf"] Jan 05 22:00:45 crc kubenswrapper[5000]: I0105 22:00:45.182474 5000 generic.go:334] "Generic (PLEG): container finished" podID="239a756c-7a26-4ae3-9c69-2b327b785d18" containerID="15dd700480521b7c04ce8109406fc1bb4c06b290b7ef637fae66014d04289882" exitCode=0 Jan 05 22:00:45 crc kubenswrapper[5000]: I0105 22:00:45.182514 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b58t5" event={"ID":"239a756c-7a26-4ae3-9c69-2b327b785d18","Type":"ContainerDied","Data":"15dd700480521b7c04ce8109406fc1bb4c06b290b7ef637fae66014d04289882"} Jan 05 22:00:45 crc kubenswrapper[5000]: I0105 22:00:45.182540 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b58t5" event={"ID":"239a756c-7a26-4ae3-9c69-2b327b785d18","Type":"ContainerStarted","Data":"1f2a1eac491b71d949d37dcf425b5d881426357645380b66f2259178e6c950d5"} Jan 05 22:00:45 crc kubenswrapper[5000]: I0105 22:00:45.335163 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4c462c92-9ae1-4351-bd0b-e97d442e2b6a" path="/var/lib/kubelet/pods/4c462c92-9ae1-4351-bd0b-e97d442e2b6a/volumes" Jan 05 22:00:46 crc kubenswrapper[5000]: I0105 22:00:46.039409 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6g5ww"] Jan 05 22:00:46 crc kubenswrapper[5000]: I0105 22:00:46.048415 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6g5ww"] Jan 05 22:00:47 crc kubenswrapper[5000]: I0105 22:00:47.200708 5000 generic.go:334] "Generic (PLEG): container finished" podID="239a756c-7a26-4ae3-9c69-2b327b785d18" containerID="bf95cbc2b6836f3fb1f98c975555d4172e4befe25bf9aa8153322e5aa61eb8ad" exitCode=0 Jan 05 22:00:47 crc kubenswrapper[5000]: I0105 22:00:47.200776 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b58t5" event={"ID":"239a756c-7a26-4ae3-9c69-2b327b785d18","Type":"ContainerDied","Data":"bf95cbc2b6836f3fb1f98c975555d4172e4befe25bf9aa8153322e5aa61eb8ad"} Jan 05 22:00:47 crc kubenswrapper[5000]: I0105 22:00:47.324250 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:00:47 crc kubenswrapper[5000]: E0105 22:00:47.324593 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:00:47 crc kubenswrapper[5000]: I0105 22:00:47.344286 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e46dcd5-83ef-4a7b-a07b-a850071a330c" path="/var/lib/kubelet/pods/8e46dcd5-83ef-4a7b-a07b-a850071a330c/volumes" Jan 05 22:00:48 crc kubenswrapper[5000]: I0105 22:00:48.210751 
5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b58t5" event={"ID":"239a756c-7a26-4ae3-9c69-2b327b785d18","Type":"ContainerStarted","Data":"ab7971024c11c9a5c561fac83fead48908e40c2eae1514c1927aafbc38ecc4f4"} Jan 05 22:00:48 crc kubenswrapper[5000]: I0105 22:00:48.242986 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b58t5" podStartSLOduration=2.7017312479999998 podStartE2EDuration="5.242965629s" podCreationTimestamp="2026-01-05 22:00:43 +0000 UTC" firstStartedPulling="2026-01-05 22:00:45.184230782 +0000 UTC m=+1600.140433251" lastFinishedPulling="2026-01-05 22:00:47.725465163 +0000 UTC m=+1602.681667632" observedRunningTime="2026-01-05 22:00:48.234981832 +0000 UTC m=+1603.191184321" watchObservedRunningTime="2026-01-05 22:00:48.242965629 +0000 UTC m=+1603.199168088" Jan 05 22:00:53 crc kubenswrapper[5000]: I0105 22:00:53.977813 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:53 crc kubenswrapper[5000]: I0105 22:00:53.979182 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:54 crc kubenswrapper[5000]: I0105 22:00:54.020508 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:54 crc kubenswrapper[5000]: I0105 22:00:54.302695 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:54 crc kubenswrapper[5000]: I0105 22:00:54.356516 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b58t5"] Jan 05 22:00:56 crc kubenswrapper[5000]: I0105 22:00:56.278462 5000 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-b58t5" podUID="239a756c-7a26-4ae3-9c69-2b327b785d18" containerName="registry-server" containerID="cri-o://ab7971024c11c9a5c561fac83fead48908e40c2eae1514c1927aafbc38ecc4f4" gracePeriod=2 Jan 05 22:00:56 crc kubenswrapper[5000]: I0105 22:00:56.748061 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:56 crc kubenswrapper[5000]: I0105 22:00:56.881519 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/239a756c-7a26-4ae3-9c69-2b327b785d18-catalog-content\") pod \"239a756c-7a26-4ae3-9c69-2b327b785d18\" (UID: \"239a756c-7a26-4ae3-9c69-2b327b785d18\") " Jan 05 22:00:56 crc kubenswrapper[5000]: I0105 22:00:56.881653 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb229\" (UniqueName: \"kubernetes.io/projected/239a756c-7a26-4ae3-9c69-2b327b785d18-kube-api-access-lb229\") pod \"239a756c-7a26-4ae3-9c69-2b327b785d18\" (UID: \"239a756c-7a26-4ae3-9c69-2b327b785d18\") " Jan 05 22:00:56 crc kubenswrapper[5000]: I0105 22:00:56.882621 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/239a756c-7a26-4ae3-9c69-2b327b785d18-utilities\") pod \"239a756c-7a26-4ae3-9c69-2b327b785d18\" (UID: \"239a756c-7a26-4ae3-9c69-2b327b785d18\") " Jan 05 22:00:56 crc kubenswrapper[5000]: I0105 22:00:56.883532 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/239a756c-7a26-4ae3-9c69-2b327b785d18-utilities" (OuterVolumeSpecName: "utilities") pod "239a756c-7a26-4ae3-9c69-2b327b785d18" (UID: "239a756c-7a26-4ae3-9c69-2b327b785d18"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:00:56 crc kubenswrapper[5000]: I0105 22:00:56.888438 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/239a756c-7a26-4ae3-9c69-2b327b785d18-kube-api-access-lb229" (OuterVolumeSpecName: "kube-api-access-lb229") pod "239a756c-7a26-4ae3-9c69-2b327b785d18" (UID: "239a756c-7a26-4ae3-9c69-2b327b785d18"). InnerVolumeSpecName "kube-api-access-lb229". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:00:56 crc kubenswrapper[5000]: I0105 22:00:56.956536 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/239a756c-7a26-4ae3-9c69-2b327b785d18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "239a756c-7a26-4ae3-9c69-2b327b785d18" (UID: "239a756c-7a26-4ae3-9c69-2b327b785d18"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:00:56 crc kubenswrapper[5000]: I0105 22:00:56.985638 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/239a756c-7a26-4ae3-9c69-2b327b785d18-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:56 crc kubenswrapper[5000]: I0105 22:00:56.985681 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb229\" (UniqueName: \"kubernetes.io/projected/239a756c-7a26-4ae3-9c69-2b327b785d18-kube-api-access-lb229\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:56 crc kubenswrapper[5000]: I0105 22:00:56.985696 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/239a756c-7a26-4ae3-9c69-2b327b785d18-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.298737 5000 generic.go:334] "Generic (PLEG): container finished" podID="239a756c-7a26-4ae3-9c69-2b327b785d18" 
containerID="ab7971024c11c9a5c561fac83fead48908e40c2eae1514c1927aafbc38ecc4f4" exitCode=0 Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.298797 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b58t5" event={"ID":"239a756c-7a26-4ae3-9c69-2b327b785d18","Type":"ContainerDied","Data":"ab7971024c11c9a5c561fac83fead48908e40c2eae1514c1927aafbc38ecc4f4"} Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.298843 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b58t5" event={"ID":"239a756c-7a26-4ae3-9c69-2b327b785d18","Type":"ContainerDied","Data":"1f2a1eac491b71d949d37dcf425b5d881426357645380b66f2259178e6c950d5"} Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.298838 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b58t5" Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.298864 5000 scope.go:117] "RemoveContainer" containerID="ab7971024c11c9a5c561fac83fead48908e40c2eae1514c1927aafbc38ecc4f4" Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.331872 5000 scope.go:117] "RemoveContainer" containerID="bf95cbc2b6836f3fb1f98c975555d4172e4befe25bf9aa8153322e5aa61eb8ad" Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.343962 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b58t5"] Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.352732 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b58t5"] Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.363970 5000 scope.go:117] "RemoveContainer" containerID="15dd700480521b7c04ce8109406fc1bb4c06b290b7ef637fae66014d04289882" Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.397425 5000 scope.go:117] "RemoveContainer" containerID="ab7971024c11c9a5c561fac83fead48908e40c2eae1514c1927aafbc38ecc4f4" Jan 05 
22:00:57 crc kubenswrapper[5000]: E0105 22:00:57.397922 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab7971024c11c9a5c561fac83fead48908e40c2eae1514c1927aafbc38ecc4f4\": container with ID starting with ab7971024c11c9a5c561fac83fead48908e40c2eae1514c1927aafbc38ecc4f4 not found: ID does not exist" containerID="ab7971024c11c9a5c561fac83fead48908e40c2eae1514c1927aafbc38ecc4f4" Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.397971 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7971024c11c9a5c561fac83fead48908e40c2eae1514c1927aafbc38ecc4f4"} err="failed to get container status \"ab7971024c11c9a5c561fac83fead48908e40c2eae1514c1927aafbc38ecc4f4\": rpc error: code = NotFound desc = could not find container \"ab7971024c11c9a5c561fac83fead48908e40c2eae1514c1927aafbc38ecc4f4\": container with ID starting with ab7971024c11c9a5c561fac83fead48908e40c2eae1514c1927aafbc38ecc4f4 not found: ID does not exist" Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.397999 5000 scope.go:117] "RemoveContainer" containerID="bf95cbc2b6836f3fb1f98c975555d4172e4befe25bf9aa8153322e5aa61eb8ad" Jan 05 22:00:57 crc kubenswrapper[5000]: E0105 22:00:57.398416 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf95cbc2b6836f3fb1f98c975555d4172e4befe25bf9aa8153322e5aa61eb8ad\": container with ID starting with bf95cbc2b6836f3fb1f98c975555d4172e4befe25bf9aa8153322e5aa61eb8ad not found: ID does not exist" containerID="bf95cbc2b6836f3fb1f98c975555d4172e4befe25bf9aa8153322e5aa61eb8ad" Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.398451 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf95cbc2b6836f3fb1f98c975555d4172e4befe25bf9aa8153322e5aa61eb8ad"} err="failed to get container status 
\"bf95cbc2b6836f3fb1f98c975555d4172e4befe25bf9aa8153322e5aa61eb8ad\": rpc error: code = NotFound desc = could not find container \"bf95cbc2b6836f3fb1f98c975555d4172e4befe25bf9aa8153322e5aa61eb8ad\": container with ID starting with bf95cbc2b6836f3fb1f98c975555d4172e4befe25bf9aa8153322e5aa61eb8ad not found: ID does not exist" Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.398484 5000 scope.go:117] "RemoveContainer" containerID="15dd700480521b7c04ce8109406fc1bb4c06b290b7ef637fae66014d04289882" Jan 05 22:00:57 crc kubenswrapper[5000]: E0105 22:00:57.398799 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15dd700480521b7c04ce8109406fc1bb4c06b290b7ef637fae66014d04289882\": container with ID starting with 15dd700480521b7c04ce8109406fc1bb4c06b290b7ef637fae66014d04289882 not found: ID does not exist" containerID="15dd700480521b7c04ce8109406fc1bb4c06b290b7ef637fae66014d04289882" Jan 05 22:00:57 crc kubenswrapper[5000]: I0105 22:00:57.398826 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15dd700480521b7c04ce8109406fc1bb4c06b290b7ef637fae66014d04289882"} err="failed to get container status \"15dd700480521b7c04ce8109406fc1bb4c06b290b7ef637fae66014d04289882\": rpc error: code = NotFound desc = could not find container \"15dd700480521b7c04ce8109406fc1bb4c06b290b7ef637fae66014d04289882\": container with ID starting with 15dd700480521b7c04ce8109406fc1bb4c06b290b7ef637fae66014d04289882 not found: ID does not exist" Jan 05 22:00:59 crc kubenswrapper[5000]: I0105 22:00:59.323788 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:00:59 crc kubenswrapper[5000]: E0105 22:00:59.325263 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:00:59 crc kubenswrapper[5000]: I0105 22:00:59.334504 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="239a756c-7a26-4ae3-9c69-2b327b785d18" path="/var/lib/kubelet/pods/239a756c-7a26-4ae3-9c69-2b327b785d18/volumes" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.156807 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29460841-tkgzh"] Jan 05 22:01:00 crc kubenswrapper[5000]: E0105 22:01:00.157583 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239a756c-7a26-4ae3-9c69-2b327b785d18" containerName="registry-server" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.157604 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="239a756c-7a26-4ae3-9c69-2b327b785d18" containerName="registry-server" Jan 05 22:01:00 crc kubenswrapper[5000]: E0105 22:01:00.157617 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239a756c-7a26-4ae3-9c69-2b327b785d18" containerName="extract-utilities" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.157627 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="239a756c-7a26-4ae3-9c69-2b327b785d18" containerName="extract-utilities" Jan 05 22:01:00 crc kubenswrapper[5000]: E0105 22:01:00.157658 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239a756c-7a26-4ae3-9c69-2b327b785d18" containerName="extract-content" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.157666 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="239a756c-7a26-4ae3-9c69-2b327b785d18" containerName="extract-content" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.157909 5000 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="239a756c-7a26-4ae3-9c69-2b327b785d18" containerName="registry-server" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.158634 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.186720 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29460841-tkgzh"] Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.257111 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv6l8\" (UniqueName: \"kubernetes.io/projected/15fb1cfb-41eb-4567-a694-821f1da15b07-kube-api-access-lv6l8\") pod \"keystone-cron-29460841-tkgzh\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.257204 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-combined-ca-bundle\") pod \"keystone-cron-29460841-tkgzh\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.257226 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-fernet-keys\") pod \"keystone-cron-29460841-tkgzh\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.257532 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-config-data\") pod \"keystone-cron-29460841-tkgzh\" (UID: 
\"15fb1cfb-41eb-4567-a694-821f1da15b07\") " pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.359973 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-config-data\") pod \"keystone-cron-29460841-tkgzh\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.360090 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv6l8\" (UniqueName: \"kubernetes.io/projected/15fb1cfb-41eb-4567-a694-821f1da15b07-kube-api-access-lv6l8\") pod \"keystone-cron-29460841-tkgzh\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.360177 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-combined-ca-bundle\") pod \"keystone-cron-29460841-tkgzh\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.360199 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-fernet-keys\") pod \"keystone-cron-29460841-tkgzh\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.366504 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-combined-ca-bundle\") pod \"keystone-cron-29460841-tkgzh\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " 
pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.366529 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-config-data\") pod \"keystone-cron-29460841-tkgzh\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.376176 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-fernet-keys\") pod \"keystone-cron-29460841-tkgzh\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.376277 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv6l8\" (UniqueName: \"kubernetes.io/projected/15fb1cfb-41eb-4567-a694-821f1da15b07-kube-api-access-lv6l8\") pod \"keystone-cron-29460841-tkgzh\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.485832 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:00 crc kubenswrapper[5000]: I0105 22:01:00.901613 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29460841-tkgzh"] Jan 05 22:01:01 crc kubenswrapper[5000]: I0105 22:01:01.351501 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29460841-tkgzh" event={"ID":"15fb1cfb-41eb-4567-a694-821f1da15b07","Type":"ContainerStarted","Data":"aa50429e4c93b5f08d6927c9f03c475102a6e7d7d325681334d2dafc1047b3d2"} Jan 05 22:01:01 crc kubenswrapper[5000]: I0105 22:01:01.351829 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29460841-tkgzh" event={"ID":"15fb1cfb-41eb-4567-a694-821f1da15b07","Type":"ContainerStarted","Data":"482c566689fb9ad372bb4f9fb79d54b87a0966e9d58c648c6191fcb6ec123e9d"} Jan 05 22:01:01 crc kubenswrapper[5000]: I0105 22:01:01.381626 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29460841-tkgzh" podStartSLOduration=1.381593857 podStartE2EDuration="1.381593857s" podCreationTimestamp="2026-01-05 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:01:01.365482778 +0000 UTC m=+1616.321685257" watchObservedRunningTime="2026-01-05 22:01:01.381593857 +0000 UTC m=+1616.337796336" Jan 05 22:01:03 crc kubenswrapper[5000]: I0105 22:01:03.374643 5000 generic.go:334] "Generic (PLEG): container finished" podID="15fb1cfb-41eb-4567-a694-821f1da15b07" containerID="aa50429e4c93b5f08d6927c9f03c475102a6e7d7d325681334d2dafc1047b3d2" exitCode=0 Jan 05 22:01:03 crc kubenswrapper[5000]: I0105 22:01:03.374947 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29460841-tkgzh" 
event={"ID":"15fb1cfb-41eb-4567-a694-821f1da15b07","Type":"ContainerDied","Data":"aa50429e4c93b5f08d6927c9f03c475102a6e7d7d325681334d2dafc1047b3d2"} Jan 05 22:01:04 crc kubenswrapper[5000]: I0105 22:01:04.699740 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:04 crc kubenswrapper[5000]: I0105 22:01:04.866354 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv6l8\" (UniqueName: \"kubernetes.io/projected/15fb1cfb-41eb-4567-a694-821f1da15b07-kube-api-access-lv6l8\") pod \"15fb1cfb-41eb-4567-a694-821f1da15b07\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " Jan 05 22:01:04 crc kubenswrapper[5000]: I0105 22:01:04.866466 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-combined-ca-bundle\") pod \"15fb1cfb-41eb-4567-a694-821f1da15b07\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " Jan 05 22:01:04 crc kubenswrapper[5000]: I0105 22:01:04.866526 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-fernet-keys\") pod \"15fb1cfb-41eb-4567-a694-821f1da15b07\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " Jan 05 22:01:04 crc kubenswrapper[5000]: I0105 22:01:04.866568 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-config-data\") pod \"15fb1cfb-41eb-4567-a694-821f1da15b07\" (UID: \"15fb1cfb-41eb-4567-a694-821f1da15b07\") " Jan 05 22:01:04 crc kubenswrapper[5000]: I0105 22:01:04.872265 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "15fb1cfb-41eb-4567-a694-821f1da15b07" (UID: "15fb1cfb-41eb-4567-a694-821f1da15b07"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:01:04 crc kubenswrapper[5000]: I0105 22:01:04.872440 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15fb1cfb-41eb-4567-a694-821f1da15b07-kube-api-access-lv6l8" (OuterVolumeSpecName: "kube-api-access-lv6l8") pod "15fb1cfb-41eb-4567-a694-821f1da15b07" (UID: "15fb1cfb-41eb-4567-a694-821f1da15b07"). InnerVolumeSpecName "kube-api-access-lv6l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:01:04 crc kubenswrapper[5000]: I0105 22:01:04.893758 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15fb1cfb-41eb-4567-a694-821f1da15b07" (UID: "15fb1cfb-41eb-4567-a694-821f1da15b07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:01:04 crc kubenswrapper[5000]: I0105 22:01:04.919701 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-config-data" (OuterVolumeSpecName: "config-data") pod "15fb1cfb-41eb-4567-a694-821f1da15b07" (UID: "15fb1cfb-41eb-4567-a694-821f1da15b07"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:01:04 crc kubenswrapper[5000]: I0105 22:01:04.968853 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv6l8\" (UniqueName: \"kubernetes.io/projected/15fb1cfb-41eb-4567-a694-821f1da15b07-kube-api-access-lv6l8\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:04 crc kubenswrapper[5000]: I0105 22:01:04.968904 5000 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:04 crc kubenswrapper[5000]: I0105 22:01:04.968920 5000 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:04 crc kubenswrapper[5000]: I0105 22:01:04.968931 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15fb1cfb-41eb-4567-a694-821f1da15b07-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:05 crc kubenswrapper[5000]: I0105 22:01:05.389939 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29460841-tkgzh" event={"ID":"15fb1cfb-41eb-4567-a694-821f1da15b07","Type":"ContainerDied","Data":"482c566689fb9ad372bb4f9fb79d54b87a0966e9d58c648c6191fcb6ec123e9d"} Jan 05 22:01:05 crc kubenswrapper[5000]: I0105 22:01:05.390010 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="482c566689fb9ad372bb4f9fb79d54b87a0966e9d58c648c6191fcb6ec123e9d" Jan 05 22:01:05 crc kubenswrapper[5000]: I0105 22:01:05.389976 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29460841-tkgzh" Jan 05 22:01:10 crc kubenswrapper[5000]: I0105 22:01:10.323742 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:01:10 crc kubenswrapper[5000]: E0105 22:01:10.324696 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:01:13 crc kubenswrapper[5000]: I0105 22:01:13.036407 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-fz42m"] Jan 05 22:01:13 crc kubenswrapper[5000]: I0105 22:01:13.046394 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-fz42m"] Jan 05 22:01:13 crc kubenswrapper[5000]: I0105 22:01:13.336371 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c51a1013-b3ea-444a-b578-6cfc91b1c283" path="/var/lib/kubelet/pods/c51a1013-b3ea-444a-b578-6cfc91b1c283/volumes" Jan 05 22:01:14 crc kubenswrapper[5000]: E0105 22:01:14.463754 5000 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15fb1cfb_41eb_4567_a694_821f1da15b07.slice\": RecentStats: unable to find data in memory cache]" Jan 05 22:01:18 crc kubenswrapper[5000]: I0105 22:01:18.779328 5000 scope.go:117] "RemoveContainer" containerID="11891e0e0afb91d8d6fec56e174ac0ea5bd0799295dae78e249ba29b9afb016b" Jan 05 22:01:18 crc kubenswrapper[5000]: I0105 22:01:18.809685 5000 scope.go:117] "RemoveContainer" 
containerID="25265583ff414d808800f39fda3565ffaa38570825b4c8f313cb7c2cbdb3a374" Jan 05 22:01:18 crc kubenswrapper[5000]: I0105 22:01:18.863936 5000 scope.go:117] "RemoveContainer" containerID="c0b86e428148a8829ec674d8d1c1348f9988c252f570b4589d9023df9df696ab" Jan 05 22:01:18 crc kubenswrapper[5000]: I0105 22:01:18.903641 5000 scope.go:117] "RemoveContainer" containerID="17146eaf4414459be821baa03ee865f4422c3c9fd02929bf18a8fd7cf6b5e1b3" Jan 05 22:01:18 crc kubenswrapper[5000]: I0105 22:01:18.948030 5000 scope.go:117] "RemoveContainer" containerID="1921ce77217983e532707ba4a5e2db0081860093ee5d7ceab50184dbdaeb2591" Jan 05 22:01:19 crc kubenswrapper[5000]: I0105 22:01:19.017412 5000 scope.go:117] "RemoveContainer" containerID="bfbce37f38c34c070eab3490287310fb893446752b76ae0f8ae5033d19bb4284" Jan 05 22:01:19 crc kubenswrapper[5000]: I0105 22:01:19.035040 5000 scope.go:117] "RemoveContainer" containerID="030df051cd17ef123b438baead653693d9fc0bcb2110e627dd98409882142999" Jan 05 22:01:19 crc kubenswrapper[5000]: I0105 22:01:19.061797 5000 scope.go:117] "RemoveContainer" containerID="db3604e0f238a934124a5f33778cd5fd48a0f7de3d0e002a1c744826947f2463" Jan 05 22:01:19 crc kubenswrapper[5000]: I0105 22:01:19.081182 5000 scope.go:117] "RemoveContainer" containerID="80cac4240af458a184a16d566eb066c70df0ce539a9dee0bedb5c89f8f36b75c" Jan 05 22:01:19 crc kubenswrapper[5000]: I0105 22:01:19.109786 5000 scope.go:117] "RemoveContainer" containerID="868be418d5303816019d5ae684f9bcb8a9e2b0fa98e8d4d8a39046a000e97481" Jan 05 22:01:22 crc kubenswrapper[5000]: I0105 22:01:22.324106 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:01:22 crc kubenswrapper[5000]: E0105 22:01:22.324805 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:01:24 crc kubenswrapper[5000]: E0105 22:01:24.878051 5000 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15fb1cfb_41eb_4567_a694_821f1da15b07.slice\": RecentStats: unable to find data in memory cache]" Jan 05 22:01:27 crc kubenswrapper[5000]: I0105 22:01:27.039220 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nhpcs"] Jan 05 22:01:27 crc kubenswrapper[5000]: I0105 22:01:27.046370 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nhpcs"] Jan 05 22:01:27 crc kubenswrapper[5000]: I0105 22:01:27.054941 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-dgtdq"] Jan 05 22:01:27 crc kubenswrapper[5000]: I0105 22:01:27.064384 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-dgtdq"] Jan 05 22:01:27 crc kubenswrapper[5000]: I0105 22:01:27.335517 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="024cd8c9-c0c9-4f2c-884b-e818c2a95133" path="/var/lib/kubelet/pods/024cd8c9-c0c9-4f2c-884b-e818c2a95133/volumes" Jan 05 22:01:27 crc kubenswrapper[5000]: I0105 22:01:27.336217 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf9d2c1-13d7-4475-a978-9b02ccb6374d" path="/var/lib/kubelet/pods/faf9d2c1-13d7-4475-a978-9b02ccb6374d/volumes" Jan 05 22:01:32 crc kubenswrapper[5000]: I0105 22:01:32.028675 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-65jnl"] Jan 05 22:01:32 crc kubenswrapper[5000]: I0105 22:01:32.039177 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-db-sync-65jnl"] Jan 05 22:01:33 crc kubenswrapper[5000]: I0105 22:01:33.334218 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce305106-1701-4e2e-b87a-fc358e9c99d2" path="/var/lib/kubelet/pods/ce305106-1701-4e2e-b87a-fc358e9c99d2/volumes" Jan 05 22:01:35 crc kubenswrapper[5000]: E0105 22:01:35.093976 5000 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15fb1cfb_41eb_4567_a694_821f1da15b07.slice\": RecentStats: unable to find data in memory cache]" Jan 05 22:01:36 crc kubenswrapper[5000]: I0105 22:01:36.324236 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:01:36 crc kubenswrapper[5000]: E0105 22:01:36.324766 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:01:40 crc kubenswrapper[5000]: I0105 22:01:40.045497 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-prdrd"] Jan 05 22:01:40 crc kubenswrapper[5000]: I0105 22:01:40.054266 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-prdrd"] Jan 05 22:01:41 crc kubenswrapper[5000]: I0105 22:01:41.333507 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00" path="/var/lib/kubelet/pods/4c9d2e23-33f0-4563-a4a6-4b2aaf6adf00/volumes" Jan 05 22:01:42 crc kubenswrapper[5000]: I0105 22:01:42.835812 5000 generic.go:334] "Generic (PLEG): container finished" 
podID="85045115-6f3e-4624-9e9b-0db7e0a6419e" containerID="c9d9f33b59bb1953c755b4a428392deae0bbfbdaf9b34a60298ab779d549dc5c" exitCode=0 Jan 05 22:01:42 crc kubenswrapper[5000]: I0105 22:01:42.835909 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" event={"ID":"85045115-6f3e-4624-9e9b-0db7e0a6419e","Type":"ContainerDied","Data":"c9d9f33b59bb1953c755b4a428392deae0bbfbdaf9b34a60298ab779d549dc5c"} Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.235424 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.344828 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85045115-6f3e-4624-9e9b-0db7e0a6419e-ssh-key\") pod \"85045115-6f3e-4624-9e9b-0db7e0a6419e\" (UID: \"85045115-6f3e-4624-9e9b-0db7e0a6419e\") " Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.344934 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xkpx\" (UniqueName: \"kubernetes.io/projected/85045115-6f3e-4624-9e9b-0db7e0a6419e-kube-api-access-7xkpx\") pod \"85045115-6f3e-4624-9e9b-0db7e0a6419e\" (UID: \"85045115-6f3e-4624-9e9b-0db7e0a6419e\") " Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.344998 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85045115-6f3e-4624-9e9b-0db7e0a6419e-inventory\") pod \"85045115-6f3e-4624-9e9b-0db7e0a6419e\" (UID: \"85045115-6f3e-4624-9e9b-0db7e0a6419e\") " Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.350643 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85045115-6f3e-4624-9e9b-0db7e0a6419e-kube-api-access-7xkpx" (OuterVolumeSpecName: 
"kube-api-access-7xkpx") pod "85045115-6f3e-4624-9e9b-0db7e0a6419e" (UID: "85045115-6f3e-4624-9e9b-0db7e0a6419e"). InnerVolumeSpecName "kube-api-access-7xkpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.377280 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85045115-6f3e-4624-9e9b-0db7e0a6419e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "85045115-6f3e-4624-9e9b-0db7e0a6419e" (UID: "85045115-6f3e-4624-9e9b-0db7e0a6419e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.388618 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85045115-6f3e-4624-9e9b-0db7e0a6419e-inventory" (OuterVolumeSpecName: "inventory") pod "85045115-6f3e-4624-9e9b-0db7e0a6419e" (UID: "85045115-6f3e-4624-9e9b-0db7e0a6419e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.447501 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xkpx\" (UniqueName: \"kubernetes.io/projected/85045115-6f3e-4624-9e9b-0db7e0a6419e-kube-api-access-7xkpx\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.447534 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85045115-6f3e-4624-9e9b-0db7e0a6419e-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.447542 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85045115-6f3e-4624-9e9b-0db7e0a6419e-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.856926 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" event={"ID":"85045115-6f3e-4624-9e9b-0db7e0a6419e","Type":"ContainerDied","Data":"b1da8622025cd1c23c3728bec72f78ba8a21f26f7828142326e07d146fb1ad80"} Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.856996 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1da8622025cd1c23c3728bec72f78ba8a21f26f7828142326e07d146fb1ad80" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.856954 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.937266 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6"] Jan 05 22:01:44 crc kubenswrapper[5000]: E0105 22:01:44.937858 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85045115-6f3e-4624-9e9b-0db7e0a6419e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.937950 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="85045115-6f3e-4624-9e9b-0db7e0a6419e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 05 22:01:44 crc kubenswrapper[5000]: E0105 22:01:44.938044 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15fb1cfb-41eb-4567-a694-821f1da15b07" containerName="keystone-cron" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.938121 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fb1cfb-41eb-4567-a694-821f1da15b07" containerName="keystone-cron" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.938376 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="15fb1cfb-41eb-4567-a694-821f1da15b07" containerName="keystone-cron" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.938463 5000 
memory_manager.go:354] "RemoveStaleState removing state" podUID="85045115-6f3e-4624-9e9b-0db7e0a6419e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.939494 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.943231 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.943619 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.943838 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.943969 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.949323 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6"] Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.991304 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnhpj\" (UniqueName: \"kubernetes.io/projected/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-kube-api-access-gnhpj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6\" (UID: \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.991404 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6\" (UID: \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" Jan 05 22:01:44 crc kubenswrapper[5000]: I0105 22:01:44.991567 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6\" (UID: \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" Jan 05 22:01:45 crc kubenswrapper[5000]: I0105 22:01:45.094070 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6\" (UID: \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" Jan 05 22:01:45 crc kubenswrapper[5000]: I0105 22:01:45.094142 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnhpj\" (UniqueName: \"kubernetes.io/projected/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-kube-api-access-gnhpj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6\" (UID: \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" Jan 05 22:01:45 crc kubenswrapper[5000]: I0105 22:01:45.094224 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6\" (UID: \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" Jan 05 22:01:45 crc kubenswrapper[5000]: I0105 22:01:45.097587 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6\" (UID: \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" Jan 05 22:01:45 crc kubenswrapper[5000]: I0105 22:01:45.097632 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6\" (UID: \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" Jan 05 22:01:45 crc kubenswrapper[5000]: I0105 22:01:45.113847 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnhpj\" (UniqueName: \"kubernetes.io/projected/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-kube-api-access-gnhpj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6\" (UID: \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" Jan 05 22:01:45 crc kubenswrapper[5000]: I0105 22:01:45.296476 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" Jan 05 22:01:45 crc kubenswrapper[5000]: E0105 22:01:45.306202 5000 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15fb1cfb_41eb_4567_a694_821f1da15b07.slice\": RecentStats: unable to find data in memory cache]" Jan 05 22:01:45 crc kubenswrapper[5000]: I0105 22:01:45.826259 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6"] Jan 05 22:01:45 crc kubenswrapper[5000]: I0105 22:01:45.870786 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" event={"ID":"7cff51b1-fa8c-43c0-8563-b83e0b4542cb","Type":"ContainerStarted","Data":"2477c9105422f73da3f2f1b54aa09cd10b7366e85c0b65513bba533bf50053e2"} Jan 05 22:01:46 crc kubenswrapper[5000]: I0105 22:01:46.881267 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" event={"ID":"7cff51b1-fa8c-43c0-8563-b83e0b4542cb","Type":"ContainerStarted","Data":"9dddb82e6a26e5e384eac128a5467f2aa72462d8d18141fda6725aba2ea9ed1f"} Jan 05 22:01:46 crc kubenswrapper[5000]: I0105 22:01:46.896968 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" podStartSLOduration=2.332109138 podStartE2EDuration="2.896949513s" podCreationTimestamp="2026-01-05 22:01:44 +0000 UTC" firstStartedPulling="2026-01-05 22:01:45.832954975 +0000 UTC m=+1660.789157444" lastFinishedPulling="2026-01-05 22:01:46.39779535 +0000 UTC m=+1661.353997819" observedRunningTime="2026-01-05 22:01:46.894901955 +0000 UTC m=+1661.851104424" watchObservedRunningTime="2026-01-05 22:01:46.896949513 +0000 UTC m=+1661.853151992" Jan 05 22:01:50 crc 
kubenswrapper[5000]: I0105 22:01:50.323868 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:01:50 crc kubenswrapper[5000]: E0105 22:01:50.324791 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:01:51 crc kubenswrapper[5000]: I0105 22:01:51.928722 5000 generic.go:334] "Generic (PLEG): container finished" podID="7cff51b1-fa8c-43c0-8563-b83e0b4542cb" containerID="9dddb82e6a26e5e384eac128a5467f2aa72462d8d18141fda6725aba2ea9ed1f" exitCode=0 Jan 05 22:01:51 crc kubenswrapper[5000]: I0105 22:01:51.928815 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" event={"ID":"7cff51b1-fa8c-43c0-8563-b83e0b4542cb","Type":"ContainerDied","Data":"9dddb82e6a26e5e384eac128a5467f2aa72462d8d18141fda6725aba2ea9ed1f"} Jan 05 22:01:53 crc kubenswrapper[5000]: I0105 22:01:53.374726 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" Jan 05 22:01:53 crc kubenswrapper[5000]: I0105 22:01:53.575262 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnhpj\" (UniqueName: \"kubernetes.io/projected/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-kube-api-access-gnhpj\") pod \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\" (UID: \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\") " Jan 05 22:01:53 crc kubenswrapper[5000]: I0105 22:01:53.575376 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-ssh-key\") pod \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\" (UID: \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\") " Jan 05 22:01:53 crc kubenswrapper[5000]: I0105 22:01:53.575473 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-inventory\") pod \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\" (UID: \"7cff51b1-fa8c-43c0-8563-b83e0b4542cb\") " Jan 05 22:01:53 crc kubenswrapper[5000]: I0105 22:01:53.581198 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-kube-api-access-gnhpj" (OuterVolumeSpecName: "kube-api-access-gnhpj") pod "7cff51b1-fa8c-43c0-8563-b83e0b4542cb" (UID: "7cff51b1-fa8c-43c0-8563-b83e0b4542cb"). InnerVolumeSpecName "kube-api-access-gnhpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:01:53 crc kubenswrapper[5000]: I0105 22:01:53.599441 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-inventory" (OuterVolumeSpecName: "inventory") pod "7cff51b1-fa8c-43c0-8563-b83e0b4542cb" (UID: "7cff51b1-fa8c-43c0-8563-b83e0b4542cb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:01:53 crc kubenswrapper[5000]: I0105 22:01:53.622870 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7cff51b1-fa8c-43c0-8563-b83e0b4542cb" (UID: "7cff51b1-fa8c-43c0-8563-b83e0b4542cb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:01:53 crc kubenswrapper[5000]: I0105 22:01:53.678640 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:53 crc kubenswrapper[5000]: I0105 22:01:53.678675 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnhpj\" (UniqueName: \"kubernetes.io/projected/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-kube-api-access-gnhpj\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:53 crc kubenswrapper[5000]: I0105 22:01:53.678687 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cff51b1-fa8c-43c0-8563-b83e0b4542cb-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:01:53 crc kubenswrapper[5000]: I0105 22:01:53.959858 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" event={"ID":"7cff51b1-fa8c-43c0-8563-b83e0b4542cb","Type":"ContainerDied","Data":"2477c9105422f73da3f2f1b54aa09cd10b7366e85c0b65513bba533bf50053e2"} Jan 05 22:01:53 crc kubenswrapper[5000]: I0105 22:01:53.959948 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2477c9105422f73da3f2f1b54aa09cd10b7366e85c0b65513bba533bf50053e2" Jan 05 22:01:53 crc kubenswrapper[5000]: I0105 22:01:53.960012 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.036661 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv"] Jan 05 22:01:54 crc kubenswrapper[5000]: E0105 22:01:54.037118 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cff51b1-fa8c-43c0-8563-b83e0b4542cb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.037137 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cff51b1-fa8c-43c0-8563-b83e0b4542cb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.037323 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cff51b1-fa8c-43c0-8563-b83e0b4542cb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.038056 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.045328 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.045465 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.045671 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.049723 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.075006 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv"] Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.086425 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpwn5\" (UniqueName: \"kubernetes.io/projected/7b55f097-bc7e-471e-88de-725221c23439-kube-api-access-lpwn5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qkwpv\" (UID: \"7b55f097-bc7e-471e-88de-725221c23439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.086566 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b55f097-bc7e-471e-88de-725221c23439-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qkwpv\" (UID: \"7b55f097-bc7e-471e-88de-725221c23439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.086705 5000 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b55f097-bc7e-471e-88de-725221c23439-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qkwpv\" (UID: \"7b55f097-bc7e-471e-88de-725221c23439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.188507 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpwn5\" (UniqueName: \"kubernetes.io/projected/7b55f097-bc7e-471e-88de-725221c23439-kube-api-access-lpwn5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qkwpv\" (UID: \"7b55f097-bc7e-471e-88de-725221c23439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.188584 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b55f097-bc7e-471e-88de-725221c23439-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qkwpv\" (UID: \"7b55f097-bc7e-471e-88de-725221c23439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.188619 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b55f097-bc7e-471e-88de-725221c23439-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qkwpv\" (UID: \"7b55f097-bc7e-471e-88de-725221c23439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.193307 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b55f097-bc7e-471e-88de-725221c23439-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qkwpv\" (UID: 
\"7b55f097-bc7e-471e-88de-725221c23439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.193435 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b55f097-bc7e-471e-88de-725221c23439-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qkwpv\" (UID: \"7b55f097-bc7e-471e-88de-725221c23439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.204760 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpwn5\" (UniqueName: \"kubernetes.io/projected/7b55f097-bc7e-471e-88de-725221c23439-kube-api-access-lpwn5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qkwpv\" (UID: \"7b55f097-bc7e-471e-88de-725221c23439\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.372320 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.890563 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv"] Jan 05 22:01:54 crc kubenswrapper[5000]: I0105 22:01:54.968755 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" event={"ID":"7b55f097-bc7e-471e-88de-725221c23439","Type":"ContainerStarted","Data":"459e7dae738b39f9ce2e21e97c7f92cf7eaf8ee0ba2e4af1e834f0c1068f62d5"} Jan 05 22:01:55 crc kubenswrapper[5000]: E0105 22:01:55.571463 5000 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15fb1cfb_41eb_4567_a694_821f1da15b07.slice\": RecentStats: unable to find data in memory cache]" Jan 05 22:01:55 crc kubenswrapper[5000]: I0105 22:01:55.978042 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" event={"ID":"7b55f097-bc7e-471e-88de-725221c23439","Type":"ContainerStarted","Data":"945993d3f8dc590bb087d153913d9d10ceac180f66a797e37d77efdff8433717"} Jan 05 22:01:55 crc kubenswrapper[5000]: I0105 22:01:55.998809 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" podStartSLOduration=1.438988165 podStartE2EDuration="1.998787846s" podCreationTimestamp="2026-01-05 22:01:54 +0000 UTC" firstStartedPulling="2026-01-05 22:01:54.895737095 +0000 UTC m=+1669.851939564" lastFinishedPulling="2026-01-05 22:01:55.455536776 +0000 UTC m=+1670.411739245" observedRunningTime="2026-01-05 22:01:55.989847191 +0000 UTC m=+1670.946049660" watchObservedRunningTime="2026-01-05 22:01:55.998787846 +0000 UTC m=+1670.954990315" Jan 05 22:02:05 crc kubenswrapper[5000]: I0105 
22:02:05.332415 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:02:05 crc kubenswrapper[5000]: E0105 22:02:05.333231 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:02:19 crc kubenswrapper[5000]: I0105 22:02:19.320983 5000 scope.go:117] "RemoveContainer" containerID="40841333e4b54fe6fc4f59ad43e255090115c78dc987071d469c8633bd7bfedf" Jan 05 22:02:19 crc kubenswrapper[5000]: I0105 22:02:19.323615 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:02:19 crc kubenswrapper[5000]: E0105 22:02:19.324008 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:02:19 crc kubenswrapper[5000]: I0105 22:02:19.391691 5000 scope.go:117] "RemoveContainer" containerID="e9878eec8a7e6ee4dd26f7f70b81e1fa8913d0561e1def3e2f1a80e441fe135c" Jan 05 22:02:19 crc kubenswrapper[5000]: I0105 22:02:19.442953 5000 scope.go:117] "RemoveContainer" containerID="8ec434cc706954296e05fd728532e26a869367de103707bea5a589d46cb52d25" Jan 05 22:02:19 crc kubenswrapper[5000]: I0105 22:02:19.468772 5000 scope.go:117] "RemoveContainer" 
containerID="b40da413314869d5c3919d9fe0fde2042e8de15106cae3e0fbbd80911738984e" Jan 05 22:02:20 crc kubenswrapper[5000]: I0105 22:02:20.052580 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jbj8l"] Jan 05 22:02:20 crc kubenswrapper[5000]: I0105 22:02:20.067292 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7721-account-create-update-2dgl4"] Jan 05 22:02:20 crc kubenswrapper[5000]: I0105 22:02:20.089062 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-p89b7"] Jan 05 22:02:20 crc kubenswrapper[5000]: I0105 22:02:20.102284 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4112-account-create-update-lq2nf"] Jan 05 22:02:20 crc kubenswrapper[5000]: I0105 22:02:20.113232 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-2d8a-account-create-update-qv4nh"] Jan 05 22:02:20 crc kubenswrapper[5000]: I0105 22:02:20.122488 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-p89b7"] Jan 05 22:02:20 crc kubenswrapper[5000]: I0105 22:02:20.132030 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jbj8l"] Jan 05 22:02:20 crc kubenswrapper[5000]: I0105 22:02:20.142424 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7721-account-create-update-2dgl4"] Jan 05 22:02:20 crc kubenswrapper[5000]: I0105 22:02:20.152494 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-2d8a-account-create-update-qv4nh"] Jan 05 22:02:20 crc kubenswrapper[5000]: I0105 22:02:20.164789 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4112-account-create-update-lq2nf"] Jan 05 22:02:20 crc kubenswrapper[5000]: I0105 22:02:20.175933 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6tqm5"] Jan 05 22:02:20 crc kubenswrapper[5000]: 
I0105 22:02:20.184737 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6tqm5"] Jan 05 22:02:21 crc kubenswrapper[5000]: I0105 22:02:21.333509 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f9ee45-7624-4137-aab8-7e6896acc26d" path="/var/lib/kubelet/pods/16f9ee45-7624-4137-aab8-7e6896acc26d/volumes" Jan 05 22:02:21 crc kubenswrapper[5000]: I0105 22:02:21.334378 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25fa678c-1863-4d63-8dde-0b3a03e1bfa5" path="/var/lib/kubelet/pods/25fa678c-1863-4d63-8dde-0b3a03e1bfa5/volumes" Jan 05 22:02:21 crc kubenswrapper[5000]: I0105 22:02:21.334931 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53cad663-dcd9-47f8-ace9-a6376185c4e2" path="/var/lib/kubelet/pods/53cad663-dcd9-47f8-ace9-a6376185c4e2/volumes" Jan 05 22:02:21 crc kubenswrapper[5000]: I0105 22:02:21.335453 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a60e549-085d-42d0-baf7-df73fd417a77" path="/var/lib/kubelet/pods/8a60e549-085d-42d0-baf7-df73fd417a77/volumes" Jan 05 22:02:21 crc kubenswrapper[5000]: I0105 22:02:21.336457 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f173d560-1627-41c6-a033-c1c58cc63647" path="/var/lib/kubelet/pods/f173d560-1627-41c6-a033-c1c58cc63647/volumes" Jan 05 22:02:21 crc kubenswrapper[5000]: I0105 22:02:21.337148 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f386365d-31bf-463a-92d3-6b81c90b7786" path="/var/lib/kubelet/pods/f386365d-31bf-463a-92d3-6b81c90b7786/volumes" Jan 05 22:02:31 crc kubenswrapper[5000]: I0105 22:02:31.324409 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:02:31 crc kubenswrapper[5000]: E0105 22:02:31.325057 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:02:32 crc kubenswrapper[5000]: I0105 22:02:32.871938 5000 generic.go:334] "Generic (PLEG): container finished" podID="7b55f097-bc7e-471e-88de-725221c23439" containerID="945993d3f8dc590bb087d153913d9d10ceac180f66a797e37d77efdff8433717" exitCode=0 Jan 05 22:02:32 crc kubenswrapper[5000]: I0105 22:02:32.871984 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" event={"ID":"7b55f097-bc7e-471e-88de-725221c23439","Type":"ContainerDied","Data":"945993d3f8dc590bb087d153913d9d10ceac180f66a797e37d77efdff8433717"} Jan 05 22:02:34 crc kubenswrapper[5000]: I0105 22:02:34.245209 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" Jan 05 22:02:34 crc kubenswrapper[5000]: I0105 22:02:34.312867 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b55f097-bc7e-471e-88de-725221c23439-ssh-key\") pod \"7b55f097-bc7e-471e-88de-725221c23439\" (UID: \"7b55f097-bc7e-471e-88de-725221c23439\") " Jan 05 22:02:34 crc kubenswrapper[5000]: I0105 22:02:34.312963 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b55f097-bc7e-471e-88de-725221c23439-inventory\") pod \"7b55f097-bc7e-471e-88de-725221c23439\" (UID: \"7b55f097-bc7e-471e-88de-725221c23439\") " Jan 05 22:02:34 crc kubenswrapper[5000]: I0105 22:02:34.313066 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpwn5\" (UniqueName: \"kubernetes.io/projected/7b55f097-bc7e-471e-88de-725221c23439-kube-api-access-lpwn5\") pod \"7b55f097-bc7e-471e-88de-725221c23439\" (UID: \"7b55f097-bc7e-471e-88de-725221c23439\") " Jan 05 22:02:34 crc kubenswrapper[5000]: I0105 22:02:34.319136 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b55f097-bc7e-471e-88de-725221c23439-kube-api-access-lpwn5" (OuterVolumeSpecName: "kube-api-access-lpwn5") pod "7b55f097-bc7e-471e-88de-725221c23439" (UID: "7b55f097-bc7e-471e-88de-725221c23439"). InnerVolumeSpecName "kube-api-access-lpwn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:02:34 crc kubenswrapper[5000]: I0105 22:02:34.340740 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b55f097-bc7e-471e-88de-725221c23439-inventory" (OuterVolumeSpecName: "inventory") pod "7b55f097-bc7e-471e-88de-725221c23439" (UID: "7b55f097-bc7e-471e-88de-725221c23439"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:02:34 crc kubenswrapper[5000]: I0105 22:02:34.342025 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b55f097-bc7e-471e-88de-725221c23439-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7b55f097-bc7e-471e-88de-725221c23439" (UID: "7b55f097-bc7e-471e-88de-725221c23439"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:02:34 crc kubenswrapper[5000]: I0105 22:02:34.415432 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b55f097-bc7e-471e-88de-725221c23439-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:02:34 crc kubenswrapper[5000]: I0105 22:02:34.415737 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b55f097-bc7e-471e-88de-725221c23439-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 22:02:34 crc kubenswrapper[5000]: I0105 22:02:34.415750 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpwn5\" (UniqueName: \"kubernetes.io/projected/7b55f097-bc7e-471e-88de-725221c23439-kube-api-access-lpwn5\") on node \"crc\" DevicePath \"\"" Jan 05 22:02:34 crc kubenswrapper[5000]: I0105 22:02:34.888444 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" event={"ID":"7b55f097-bc7e-471e-88de-725221c23439","Type":"ContainerDied","Data":"459e7dae738b39f9ce2e21e97c7f92cf7eaf8ee0ba2e4af1e834f0c1068f62d5"} Jan 05 22:02:34 crc kubenswrapper[5000]: I0105 22:02:34.888510 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="459e7dae738b39f9ce2e21e97c7f92cf7eaf8ee0ba2e4af1e834f0c1068f62d5" Jan 05 22:02:34 crc kubenswrapper[5000]: I0105 22:02:34.888577 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qkwpv" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.057094 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp"] Jan 05 22:02:35 crc kubenswrapper[5000]: E0105 22:02:35.057652 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b55f097-bc7e-471e-88de-725221c23439" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.057721 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b55f097-bc7e-471e-88de-725221c23439" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.057956 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b55f097-bc7e-471e-88de-725221c23439" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.058627 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.060617 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.061351 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.061373 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.061382 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.078044 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp"] Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.129226 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83978ac1-3e0e-40e4-9009-0be10125c3a0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp\" (UID: \"83978ac1-3e0e-40e4-9009-0be10125c3a0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.129352 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8r7j\" (UniqueName: \"kubernetes.io/projected/83978ac1-3e0e-40e4-9009-0be10125c3a0-kube-api-access-k8r7j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp\" (UID: \"83978ac1-3e0e-40e4-9009-0be10125c3a0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.129390 5000 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/83978ac1-3e0e-40e4-9009-0be10125c3a0-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp\" (UID: \"83978ac1-3e0e-40e4-9009-0be10125c3a0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.230543 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83978ac1-3e0e-40e4-9009-0be10125c3a0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp\" (UID: \"83978ac1-3e0e-40e4-9009-0be10125c3a0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.230678 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8r7j\" (UniqueName: \"kubernetes.io/projected/83978ac1-3e0e-40e4-9009-0be10125c3a0-kube-api-access-k8r7j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp\" (UID: \"83978ac1-3e0e-40e4-9009-0be10125c3a0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.230723 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/83978ac1-3e0e-40e4-9009-0be10125c3a0-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp\" (UID: \"83978ac1-3e0e-40e4-9009-0be10125c3a0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.235827 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83978ac1-3e0e-40e4-9009-0be10125c3a0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp\" (UID: 
\"83978ac1-3e0e-40e4-9009-0be10125c3a0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.235827 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/83978ac1-3e0e-40e4-9009-0be10125c3a0-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp\" (UID: \"83978ac1-3e0e-40e4-9009-0be10125c3a0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.248862 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8r7j\" (UniqueName: \"kubernetes.io/projected/83978ac1-3e0e-40e4-9009-0be10125c3a0-kube-api-access-k8r7j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp\" (UID: \"83978ac1-3e0e-40e4-9009-0be10125c3a0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.424778 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" Jan 05 22:02:35 crc kubenswrapper[5000]: I0105 22:02:35.930784 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp"] Jan 05 22:02:36 crc kubenswrapper[5000]: I0105 22:02:36.906796 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" event={"ID":"83978ac1-3e0e-40e4-9009-0be10125c3a0","Type":"ContainerStarted","Data":"a3466a0ab7f6372f509270b0e09f47ca17518f92dfcc8dc60591f08da0a87241"} Jan 05 22:02:36 crc kubenswrapper[5000]: I0105 22:02:36.907087 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" event={"ID":"83978ac1-3e0e-40e4-9009-0be10125c3a0","Type":"ContainerStarted","Data":"ff6f10b88c2a4ad4b95e8d524f5647dbad91621d2bf9f19fe228094952115aff"} Jan 05 22:02:36 crc kubenswrapper[5000]: I0105 22:02:36.936224 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" podStartSLOduration=1.413879922 podStartE2EDuration="1.936203195s" podCreationTimestamp="2026-01-05 22:02:35 +0000 UTC" firstStartedPulling="2026-01-05 22:02:35.937201119 +0000 UTC m=+1710.893403588" lastFinishedPulling="2026-01-05 22:02:36.459524402 +0000 UTC m=+1711.415726861" observedRunningTime="2026-01-05 22:02:36.922747892 +0000 UTC m=+1711.878950401" watchObservedRunningTime="2026-01-05 22:02:36.936203195 +0000 UTC m=+1711.892405684" Jan 05 22:02:43 crc kubenswrapper[5000]: I0105 22:02:43.324362 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:02:43 crc kubenswrapper[5000]: E0105 22:02:43.325171 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:02:46 crc kubenswrapper[5000]: I0105 22:02:46.046795 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qtjzr"] Jan 05 22:02:46 crc kubenswrapper[5000]: I0105 22:02:46.057992 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qtjzr"] Jan 05 22:02:47 crc kubenswrapper[5000]: I0105 22:02:47.334977 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fadbba38-e7c5-464a-99d9-7895875ab04b" path="/var/lib/kubelet/pods/fadbba38-e7c5-464a-99d9-7895875ab04b/volumes" Jan 05 22:02:57 crc kubenswrapper[5000]: I0105 22:02:57.324095 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:02:57 crc kubenswrapper[5000]: E0105 22:02:57.324947 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:03:08 crc kubenswrapper[5000]: I0105 22:03:08.323837 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:03:08 crc kubenswrapper[5000]: E0105 22:03:08.324714 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:03:09 crc kubenswrapper[5000]: I0105 22:03:09.028161 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-sxtrz"] Jan 05 22:03:09 crc kubenswrapper[5000]: I0105 22:03:09.036033 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8mr65"] Jan 05 22:03:09 crc kubenswrapper[5000]: I0105 22:03:09.043656 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-sxtrz"] Jan 05 22:03:09 crc kubenswrapper[5000]: I0105 22:03:09.053932 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8mr65"] Jan 05 22:03:09 crc kubenswrapper[5000]: I0105 22:03:09.333231 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b371d36-3b35-4109-965b-98343703594b" path="/var/lib/kubelet/pods/5b371d36-3b35-4109-965b-98343703594b/volumes" Jan 05 22:03:09 crc kubenswrapper[5000]: I0105 22:03:09.333767 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="656c76b1-9f0a-4d3c-8a5b-dc5e823b8641" path="/var/lib/kubelet/pods/656c76b1-9f0a-4d3c-8a5b-dc5e823b8641/volumes" Jan 05 22:03:19 crc kubenswrapper[5000]: I0105 22:03:19.608790 5000 scope.go:117] "RemoveContainer" containerID="2b739df2687959e1e0a43aa3cbce7c1c695b06cb81dfcf906d16a45782ca5d0b" Jan 05 22:03:19 crc kubenswrapper[5000]: I0105 22:03:19.632562 5000 scope.go:117] "RemoveContainer" containerID="244da10ebe95372f1f4adde66b28a69c402733a8fa2affe052e5e13695766764" Jan 05 22:03:19 crc kubenswrapper[5000]: I0105 22:03:19.705564 5000 scope.go:117] "RemoveContainer" containerID="4cb9561782447b7d5e3a3f65af9f0601af25b11c187cb16b76f6b811ae82cd8e" Jan 05 22:03:19 crc 
kubenswrapper[5000]: I0105 22:03:19.775056 5000 scope.go:117] "RemoveContainer" containerID="1fa816145ff9efc4595553cdb6b93d241ebf70062d65280242035298a33c791e" Jan 05 22:03:19 crc kubenswrapper[5000]: I0105 22:03:19.822433 5000 scope.go:117] "RemoveContainer" containerID="301cfa2fcd0e82be3a7cc924d90d5db5aeba6fd1805f067a483271cd1d9b8146" Jan 05 22:03:19 crc kubenswrapper[5000]: I0105 22:03:19.850737 5000 scope.go:117] "RemoveContainer" containerID="72b8de78bc990b250edced9fbf21e3e60192f17c5cba1c147812e64a79868ee1" Jan 05 22:03:19 crc kubenswrapper[5000]: I0105 22:03:19.887630 5000 scope.go:117] "RemoveContainer" containerID="7a0516c4dabca8f67371b346afd266261167f48564db9179cb52c1b48c873876" Jan 05 22:03:19 crc kubenswrapper[5000]: I0105 22:03:19.907787 5000 scope.go:117] "RemoveContainer" containerID="215b27dfcaa871064ed3737bef3a54e03624499cd43beb7f21f5d5d92ae2250a" Jan 05 22:03:19 crc kubenswrapper[5000]: I0105 22:03:19.954963 5000 scope.go:117] "RemoveContainer" containerID="809ff57659189c0aa7b9a59f8700d7f6a156a99880e466cb9eadff0bf3fc511a" Jan 05 22:03:22 crc kubenswrapper[5000]: I0105 22:03:22.323964 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:03:22 crc kubenswrapper[5000]: E0105 22:03:22.324540 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:03:26 crc kubenswrapper[5000]: I0105 22:03:26.385357 5000 generic.go:334] "Generic (PLEG): container finished" podID="83978ac1-3e0e-40e4-9009-0be10125c3a0" containerID="a3466a0ab7f6372f509270b0e09f47ca17518f92dfcc8dc60591f08da0a87241" exitCode=0 Jan 05 22:03:26 
crc kubenswrapper[5000]: I0105 22:03:26.385489 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" event={"ID":"83978ac1-3e0e-40e4-9009-0be10125c3a0","Type":"ContainerDied","Data":"a3466a0ab7f6372f509270b0e09f47ca17518f92dfcc8dc60591f08da0a87241"} Jan 05 22:03:27 crc kubenswrapper[5000]: I0105 22:03:27.789309 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" Jan 05 22:03:27 crc kubenswrapper[5000]: I0105 22:03:27.974031 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8r7j\" (UniqueName: \"kubernetes.io/projected/83978ac1-3e0e-40e4-9009-0be10125c3a0-kube-api-access-k8r7j\") pod \"83978ac1-3e0e-40e4-9009-0be10125c3a0\" (UID: \"83978ac1-3e0e-40e4-9009-0be10125c3a0\") " Jan 05 22:03:27 crc kubenswrapper[5000]: I0105 22:03:27.974121 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/83978ac1-3e0e-40e4-9009-0be10125c3a0-ssh-key\") pod \"83978ac1-3e0e-40e4-9009-0be10125c3a0\" (UID: \"83978ac1-3e0e-40e4-9009-0be10125c3a0\") " Jan 05 22:03:27 crc kubenswrapper[5000]: I0105 22:03:27.974270 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83978ac1-3e0e-40e4-9009-0be10125c3a0-inventory\") pod \"83978ac1-3e0e-40e4-9009-0be10125c3a0\" (UID: \"83978ac1-3e0e-40e4-9009-0be10125c3a0\") " Jan 05 22:03:27 crc kubenswrapper[5000]: I0105 22:03:27.979791 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83978ac1-3e0e-40e4-9009-0be10125c3a0-kube-api-access-k8r7j" (OuterVolumeSpecName: "kube-api-access-k8r7j") pod "83978ac1-3e0e-40e4-9009-0be10125c3a0" (UID: "83978ac1-3e0e-40e4-9009-0be10125c3a0"). InnerVolumeSpecName "kube-api-access-k8r7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.001997 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83978ac1-3e0e-40e4-9009-0be10125c3a0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "83978ac1-3e0e-40e4-9009-0be10125c3a0" (UID: "83978ac1-3e0e-40e4-9009-0be10125c3a0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.004554 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83978ac1-3e0e-40e4-9009-0be10125c3a0-inventory" (OuterVolumeSpecName: "inventory") pod "83978ac1-3e0e-40e4-9009-0be10125c3a0" (UID: "83978ac1-3e0e-40e4-9009-0be10125c3a0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.076981 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83978ac1-3e0e-40e4-9009-0be10125c3a0-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.077017 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8r7j\" (UniqueName: \"kubernetes.io/projected/83978ac1-3e0e-40e4-9009-0be10125c3a0-kube-api-access-k8r7j\") on node \"crc\" DevicePath \"\"" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.077031 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/83978ac1-3e0e-40e4-9009-0be10125c3a0-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.401755 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" 
event={"ID":"83978ac1-3e0e-40e4-9009-0be10125c3a0","Type":"ContainerDied","Data":"ff6f10b88c2a4ad4b95e8d524f5647dbad91621d2bf9f19fe228094952115aff"} Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.401790 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff6f10b88c2a4ad4b95e8d524f5647dbad91621d2bf9f19fe228094952115aff" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.401833 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.497040 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-n2m7c"] Jan 05 22:03:28 crc kubenswrapper[5000]: E0105 22:03:28.497517 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83978ac1-3e0e-40e4-9009-0be10125c3a0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.497547 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="83978ac1-3e0e-40e4-9009-0be10125c3a0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.497838 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="83978ac1-3e0e-40e4-9009-0be10125c3a0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.498714 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.501452 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.502009 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.502478 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.502666 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.509254 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-n2m7c"] Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.685546 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c816069b-4834-4cf8-ada8-c7bf3d339ba2-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-n2m7c\" (UID: \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\") " pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.685595 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xdvv\" (UniqueName: \"kubernetes.io/projected/c816069b-4834-4cf8-ada8-c7bf3d339ba2-kube-api-access-4xdvv\") pod \"ssh-known-hosts-edpm-deployment-n2m7c\" (UID: \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\") " pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.685685 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c816069b-4834-4cf8-ada8-c7bf3d339ba2-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-n2m7c\" (UID: \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\") " pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.787167 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c816069b-4834-4cf8-ada8-c7bf3d339ba2-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-n2m7c\" (UID: \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\") " pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.787230 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xdvv\" (UniqueName: \"kubernetes.io/projected/c816069b-4834-4cf8-ada8-c7bf3d339ba2-kube-api-access-4xdvv\") pod \"ssh-known-hosts-edpm-deployment-n2m7c\" (UID: \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\") " pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.787316 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c816069b-4834-4cf8-ada8-c7bf3d339ba2-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-n2m7c\" (UID: \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\") " pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.792617 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c816069b-4834-4cf8-ada8-c7bf3d339ba2-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-n2m7c\" (UID: \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\") " pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" Jan 05 22:03:28 crc 
kubenswrapper[5000]: I0105 22:03:28.794446 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c816069b-4834-4cf8-ada8-c7bf3d339ba2-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-n2m7c\" (UID: \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\") " pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.810972 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xdvv\" (UniqueName: \"kubernetes.io/projected/c816069b-4834-4cf8-ada8-c7bf3d339ba2-kube-api-access-4xdvv\") pod \"ssh-known-hosts-edpm-deployment-n2m7c\" (UID: \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\") " pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" Jan 05 22:03:28 crc kubenswrapper[5000]: I0105 22:03:28.817930 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" Jan 05 22:03:29 crc kubenswrapper[5000]: I0105 22:03:29.166786 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-n2m7c"] Jan 05 22:03:29 crc kubenswrapper[5000]: I0105 22:03:29.412636 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" event={"ID":"c816069b-4834-4cf8-ada8-c7bf3d339ba2","Type":"ContainerStarted","Data":"3fddf0967107c46f9f2cfeb957ee10a63fdd6da3b534ad661bb6d37f3e4e330e"} Jan 05 22:03:30 crc kubenswrapper[5000]: I0105 22:03:30.424526 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" event={"ID":"c816069b-4834-4cf8-ada8-c7bf3d339ba2","Type":"ContainerStarted","Data":"e69d0a64afe48bfd1aaf4315dc6dc20c6896b7906fe8a872f48dd3190d367528"} Jan 05 22:03:30 crc kubenswrapper[5000]: I0105 22:03:30.450699 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" podStartSLOduration=2.011727535 podStartE2EDuration="2.450676021s" podCreationTimestamp="2026-01-05 22:03:28 +0000 UTC" firstStartedPulling="2026-01-05 22:03:29.171499573 +0000 UTC m=+1764.127702052" lastFinishedPulling="2026-01-05 22:03:29.610448069 +0000 UTC m=+1764.566650538" observedRunningTime="2026-01-05 22:03:30.441563802 +0000 UTC m=+1765.397766301" watchObservedRunningTime="2026-01-05 22:03:30.450676021 +0000 UTC m=+1765.406878500" Jan 05 22:03:35 crc kubenswrapper[5000]: I0105 22:03:35.329037 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:03:35 crc kubenswrapper[5000]: E0105 22:03:35.330005 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:03:37 crc kubenswrapper[5000]: I0105 22:03:37.503825 5000 generic.go:334] "Generic (PLEG): container finished" podID="c816069b-4834-4cf8-ada8-c7bf3d339ba2" containerID="e69d0a64afe48bfd1aaf4315dc6dc20c6896b7906fe8a872f48dd3190d367528" exitCode=0 Jan 05 22:03:37 crc kubenswrapper[5000]: I0105 22:03:37.504133 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" event={"ID":"c816069b-4834-4cf8-ada8-c7bf3d339ba2","Type":"ContainerDied","Data":"e69d0a64afe48bfd1aaf4315dc6dc20c6896b7906fe8a872f48dd3190d367528"} Jan 05 22:03:38 crc kubenswrapper[5000]: I0105 22:03:38.912447 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.109916 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xdvv\" (UniqueName: \"kubernetes.io/projected/c816069b-4834-4cf8-ada8-c7bf3d339ba2-kube-api-access-4xdvv\") pod \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\" (UID: \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\") " Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.110031 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c816069b-4834-4cf8-ada8-c7bf3d339ba2-inventory-0\") pod \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\" (UID: \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\") " Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.110112 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c816069b-4834-4cf8-ada8-c7bf3d339ba2-ssh-key-openstack-edpm-ipam\") pod \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\" (UID: \"c816069b-4834-4cf8-ada8-c7bf3d339ba2\") " Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.115307 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c816069b-4834-4cf8-ada8-c7bf3d339ba2-kube-api-access-4xdvv" (OuterVolumeSpecName: "kube-api-access-4xdvv") pod "c816069b-4834-4cf8-ada8-c7bf3d339ba2" (UID: "c816069b-4834-4cf8-ada8-c7bf3d339ba2"). InnerVolumeSpecName "kube-api-access-4xdvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.140158 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c816069b-4834-4cf8-ada8-c7bf3d339ba2-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c816069b-4834-4cf8-ada8-c7bf3d339ba2" (UID: "c816069b-4834-4cf8-ada8-c7bf3d339ba2"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.147636 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c816069b-4834-4cf8-ada8-c7bf3d339ba2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c816069b-4834-4cf8-ada8-c7bf3d339ba2" (UID: "c816069b-4834-4cf8-ada8-c7bf3d339ba2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.212504 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xdvv\" (UniqueName: \"kubernetes.io/projected/c816069b-4834-4cf8-ada8-c7bf3d339ba2-kube-api-access-4xdvv\") on node \"crc\" DevicePath \"\"" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.212542 5000 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c816069b-4834-4cf8-ada8-c7bf3d339ba2-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.212555 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c816069b-4834-4cf8-ada8-c7bf3d339ba2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.521670 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" event={"ID":"c816069b-4834-4cf8-ada8-c7bf3d339ba2","Type":"ContainerDied","Data":"3fddf0967107c46f9f2cfeb957ee10a63fdd6da3b534ad661bb6d37f3e4e330e"} Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.521710 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fddf0967107c46f9f2cfeb957ee10a63fdd6da3b534ad661bb6d37f3e4e330e" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.521716 
5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n2m7c" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.581165 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9"] Jan 05 22:03:39 crc kubenswrapper[5000]: E0105 22:03:39.581529 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c816069b-4834-4cf8-ada8-c7bf3d339ba2" containerName="ssh-known-hosts-edpm-deployment" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.581549 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="c816069b-4834-4cf8-ada8-c7bf3d339ba2" containerName="ssh-known-hosts-edpm-deployment" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.581742 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="c816069b-4834-4cf8-ada8-c7bf3d339ba2" containerName="ssh-known-hosts-edpm-deployment" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.582287 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.585520 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.585692 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.586139 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.586249 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.592421 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9"] Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.720167 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/500728b5-6ea6-4696-b63d-36d1a1c64cce-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t76t9\" (UID: \"500728b5-6ea6-4696-b63d-36d1a1c64cce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.720291 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62d4t\" (UniqueName: \"kubernetes.io/projected/500728b5-6ea6-4696-b63d-36d1a1c64cce-kube-api-access-62d4t\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t76t9\" (UID: \"500728b5-6ea6-4696-b63d-36d1a1c64cce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.720323 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500728b5-6ea6-4696-b63d-36d1a1c64cce-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t76t9\" (UID: \"500728b5-6ea6-4696-b63d-36d1a1c64cce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.821593 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62d4t\" (UniqueName: \"kubernetes.io/projected/500728b5-6ea6-4696-b63d-36d1a1c64cce-kube-api-access-62d4t\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t76t9\" (UID: \"500728b5-6ea6-4696-b63d-36d1a1c64cce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.821639 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500728b5-6ea6-4696-b63d-36d1a1c64cce-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t76t9\" (UID: \"500728b5-6ea6-4696-b63d-36d1a1c64cce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.821730 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/500728b5-6ea6-4696-b63d-36d1a1c64cce-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t76t9\" (UID: \"500728b5-6ea6-4696-b63d-36d1a1c64cce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.826344 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/500728b5-6ea6-4696-b63d-36d1a1c64cce-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t76t9\" (UID: \"500728b5-6ea6-4696-b63d-36d1a1c64cce\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.832357 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500728b5-6ea6-4696-b63d-36d1a1c64cce-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t76t9\" (UID: \"500728b5-6ea6-4696-b63d-36d1a1c64cce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.842077 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62d4t\" (UniqueName: \"kubernetes.io/projected/500728b5-6ea6-4696-b63d-36d1a1c64cce-kube-api-access-62d4t\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t76t9\" (UID: \"500728b5-6ea6-4696-b63d-36d1a1c64cce\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" Jan 05 22:03:39 crc kubenswrapper[5000]: I0105 22:03:39.901404 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" Jan 05 22:03:40 crc kubenswrapper[5000]: I0105 22:03:40.396217 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9"] Jan 05 22:03:40 crc kubenswrapper[5000]: I0105 22:03:40.548248 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" event={"ID":"500728b5-6ea6-4696-b63d-36d1a1c64cce","Type":"ContainerStarted","Data":"1bfaa110f9394df53b6e79e661b285843172329015550b2cf5d06e48d8181f6a"} Jan 05 22:03:41 crc kubenswrapper[5000]: I0105 22:03:41.557406 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" event={"ID":"500728b5-6ea6-4696-b63d-36d1a1c64cce","Type":"ContainerStarted","Data":"ead23b997d6af755a6f1cec0793ab8d1838186de34e436107a34d71f2013cc03"} Jan 05 22:03:41 crc kubenswrapper[5000]: I0105 22:03:41.586799 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" podStartSLOduration=2.147103801 podStartE2EDuration="2.586777599s" podCreationTimestamp="2026-01-05 22:03:39 +0000 UTC" firstStartedPulling="2026-01-05 22:03:40.40801216 +0000 UTC m=+1775.364214649" lastFinishedPulling="2026-01-05 22:03:40.847685988 +0000 UTC m=+1775.803888447" observedRunningTime="2026-01-05 22:03:41.574914111 +0000 UTC m=+1776.531116600" watchObservedRunningTime="2026-01-05 22:03:41.586777599 +0000 UTC m=+1776.542980068" Jan 05 22:03:49 crc kubenswrapper[5000]: I0105 22:03:49.324088 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:03:49 crc kubenswrapper[5000]: E0105 22:03:49.324862 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:03:49 crc kubenswrapper[5000]: I0105 22:03:49.638948 5000 generic.go:334] "Generic (PLEG): container finished" podID="500728b5-6ea6-4696-b63d-36d1a1c64cce" containerID="ead23b997d6af755a6f1cec0793ab8d1838186de34e436107a34d71f2013cc03" exitCode=0 Jan 05 22:03:49 crc kubenswrapper[5000]: I0105 22:03:49.639004 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" event={"ID":"500728b5-6ea6-4696-b63d-36d1a1c64cce","Type":"ContainerDied","Data":"ead23b997d6af755a6f1cec0793ab8d1838186de34e436107a34d71f2013cc03"} Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.048590 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.146909 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62d4t\" (UniqueName: \"kubernetes.io/projected/500728b5-6ea6-4696-b63d-36d1a1c64cce-kube-api-access-62d4t\") pod \"500728b5-6ea6-4696-b63d-36d1a1c64cce\" (UID: \"500728b5-6ea6-4696-b63d-36d1a1c64cce\") " Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.147049 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500728b5-6ea6-4696-b63d-36d1a1c64cce-inventory\") pod \"500728b5-6ea6-4696-b63d-36d1a1c64cce\" (UID: \"500728b5-6ea6-4696-b63d-36d1a1c64cce\") " Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.147104 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/500728b5-6ea6-4696-b63d-36d1a1c64cce-ssh-key\") pod \"500728b5-6ea6-4696-b63d-36d1a1c64cce\" (UID: \"500728b5-6ea6-4696-b63d-36d1a1c64cce\") " Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.152822 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500728b5-6ea6-4696-b63d-36d1a1c64cce-kube-api-access-62d4t" (OuterVolumeSpecName: "kube-api-access-62d4t") pod "500728b5-6ea6-4696-b63d-36d1a1c64cce" (UID: "500728b5-6ea6-4696-b63d-36d1a1c64cce"). InnerVolumeSpecName "kube-api-access-62d4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.172626 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500728b5-6ea6-4696-b63d-36d1a1c64cce-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "500728b5-6ea6-4696-b63d-36d1a1c64cce" (UID: "500728b5-6ea6-4696-b63d-36d1a1c64cce"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.190960 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500728b5-6ea6-4696-b63d-36d1a1c64cce-inventory" (OuterVolumeSpecName: "inventory") pod "500728b5-6ea6-4696-b63d-36d1a1c64cce" (UID: "500728b5-6ea6-4696-b63d-36d1a1c64cce"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.248827 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62d4t\" (UniqueName: \"kubernetes.io/projected/500728b5-6ea6-4696-b63d-36d1a1c64cce-kube-api-access-62d4t\") on node \"crc\" DevicePath \"\"" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.248860 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500728b5-6ea6-4696-b63d-36d1a1c64cce-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.248871 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/500728b5-6ea6-4696-b63d-36d1a1c64cce-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.658495 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" event={"ID":"500728b5-6ea6-4696-b63d-36d1a1c64cce","Type":"ContainerDied","Data":"1bfaa110f9394df53b6e79e661b285843172329015550b2cf5d06e48d8181f6a"} Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.658533 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bfaa110f9394df53b6e79e661b285843172329015550b2cf5d06e48d8181f6a" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.658621 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t76t9" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.725064 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2"] Jan 05 22:03:51 crc kubenswrapper[5000]: E0105 22:03:51.725440 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="500728b5-6ea6-4696-b63d-36d1a1c64cce" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.725461 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="500728b5-6ea6-4696-b63d-36d1a1c64cce" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.725620 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="500728b5-6ea6-4696-b63d-36d1a1c64cce" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.726180 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.728831 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.728927 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.728944 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.729027 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.740624 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2"] Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.756696 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjhf8\" (UniqueName: \"kubernetes.io/projected/b441855d-0224-48d7-b39e-0930dbd9d1d5-kube-api-access-qjhf8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2\" (UID: \"b441855d-0224-48d7-b39e-0930dbd9d1d5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.756755 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b441855d-0224-48d7-b39e-0930dbd9d1d5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2\" (UID: \"b441855d-0224-48d7-b39e-0930dbd9d1d5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.756857 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b441855d-0224-48d7-b39e-0930dbd9d1d5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2\" (UID: \"b441855d-0224-48d7-b39e-0930dbd9d1d5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.858479 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjhf8\" (UniqueName: \"kubernetes.io/projected/b441855d-0224-48d7-b39e-0930dbd9d1d5-kube-api-access-qjhf8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2\" (UID: \"b441855d-0224-48d7-b39e-0930dbd9d1d5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.858856 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b441855d-0224-48d7-b39e-0930dbd9d1d5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2\" (UID: \"b441855d-0224-48d7-b39e-0930dbd9d1d5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.858917 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b441855d-0224-48d7-b39e-0930dbd9d1d5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2\" (UID: \"b441855d-0224-48d7-b39e-0930dbd9d1d5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.868083 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b441855d-0224-48d7-b39e-0930dbd9d1d5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2\" (UID: \"b441855d-0224-48d7-b39e-0930dbd9d1d5\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.869435 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b441855d-0224-48d7-b39e-0930dbd9d1d5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2\" (UID: \"b441855d-0224-48d7-b39e-0930dbd9d1d5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" Jan 05 22:03:51 crc kubenswrapper[5000]: I0105 22:03:51.875680 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjhf8\" (UniqueName: \"kubernetes.io/projected/b441855d-0224-48d7-b39e-0930dbd9d1d5-kube-api-access-qjhf8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2\" (UID: \"b441855d-0224-48d7-b39e-0930dbd9d1d5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" Jan 05 22:03:52 crc kubenswrapper[5000]: I0105 22:03:52.049066 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" Jan 05 22:03:52 crc kubenswrapper[5000]: I0105 22:03:52.603176 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2"] Jan 05 22:03:52 crc kubenswrapper[5000]: I0105 22:03:52.667623 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" event={"ID":"b441855d-0224-48d7-b39e-0930dbd9d1d5","Type":"ContainerStarted","Data":"a94249647df48f47ccfe4a06f37f789dc8670eab453fe704e39c757c0d124128"} Jan 05 22:03:53 crc kubenswrapper[5000]: I0105 22:03:53.036462 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5hdtf"] Jan 05 22:03:53 crc kubenswrapper[5000]: I0105 22:03:53.044200 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5hdtf"] Jan 05 22:03:53 crc kubenswrapper[5000]: I0105 22:03:53.336191 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e147c3d-cd84-4850-8ccc-9bd2c85c848a" path="/var/lib/kubelet/pods/2e147c3d-cd84-4850-8ccc-9bd2c85c848a/volumes" Jan 05 22:03:53 crc kubenswrapper[5000]: I0105 22:03:53.678171 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" event={"ID":"b441855d-0224-48d7-b39e-0930dbd9d1d5","Type":"ContainerStarted","Data":"c810c3f8c3b6f528a90c4a70e342f4eb1bb57b35dfad2bd596153815c9fc8da9"} Jan 05 22:03:53 crc kubenswrapper[5000]: I0105 22:03:53.711553 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" podStartSLOduration=2.113542478 podStartE2EDuration="2.711531877s" podCreationTimestamp="2026-01-05 22:03:51 +0000 UTC" firstStartedPulling="2026-01-05 22:03:52.609042722 +0000 UTC m=+1787.565245201" lastFinishedPulling="2026-01-05 22:03:53.207032131 +0000 
UTC m=+1788.163234600" observedRunningTime="2026-01-05 22:03:53.694289646 +0000 UTC m=+1788.650492135" watchObservedRunningTime="2026-01-05 22:03:53.711531877 +0000 UTC m=+1788.667734356" Jan 05 22:04:01 crc kubenswrapper[5000]: I0105 22:04:01.324352 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:04:01 crc kubenswrapper[5000]: E0105 22:04:01.324929 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:04:02 crc kubenswrapper[5000]: I0105 22:04:02.742832 5000 generic.go:334] "Generic (PLEG): container finished" podID="b441855d-0224-48d7-b39e-0930dbd9d1d5" containerID="c810c3f8c3b6f528a90c4a70e342f4eb1bb57b35dfad2bd596153815c9fc8da9" exitCode=0 Jan 05 22:04:02 crc kubenswrapper[5000]: I0105 22:04:02.743146 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" event={"ID":"b441855d-0224-48d7-b39e-0930dbd9d1d5","Type":"ContainerDied","Data":"c810c3f8c3b6f528a90c4a70e342f4eb1bb57b35dfad2bd596153815c9fc8da9"} Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.248466 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.448716 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjhf8\" (UniqueName: \"kubernetes.io/projected/b441855d-0224-48d7-b39e-0930dbd9d1d5-kube-api-access-qjhf8\") pod \"b441855d-0224-48d7-b39e-0930dbd9d1d5\" (UID: \"b441855d-0224-48d7-b39e-0930dbd9d1d5\") " Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.449149 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b441855d-0224-48d7-b39e-0930dbd9d1d5-ssh-key\") pod \"b441855d-0224-48d7-b39e-0930dbd9d1d5\" (UID: \"b441855d-0224-48d7-b39e-0930dbd9d1d5\") " Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.449483 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b441855d-0224-48d7-b39e-0930dbd9d1d5-inventory\") pod \"b441855d-0224-48d7-b39e-0930dbd9d1d5\" (UID: \"b441855d-0224-48d7-b39e-0930dbd9d1d5\") " Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.455087 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b441855d-0224-48d7-b39e-0930dbd9d1d5-kube-api-access-qjhf8" (OuterVolumeSpecName: "kube-api-access-qjhf8") pod "b441855d-0224-48d7-b39e-0930dbd9d1d5" (UID: "b441855d-0224-48d7-b39e-0930dbd9d1d5"). InnerVolumeSpecName "kube-api-access-qjhf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.476082 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b441855d-0224-48d7-b39e-0930dbd9d1d5-inventory" (OuterVolumeSpecName: "inventory") pod "b441855d-0224-48d7-b39e-0930dbd9d1d5" (UID: "b441855d-0224-48d7-b39e-0930dbd9d1d5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.478435 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b441855d-0224-48d7-b39e-0930dbd9d1d5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b441855d-0224-48d7-b39e-0930dbd9d1d5" (UID: "b441855d-0224-48d7-b39e-0930dbd9d1d5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.550732 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b441855d-0224-48d7-b39e-0930dbd9d1d5-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.550937 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjhf8\" (UniqueName: \"kubernetes.io/projected/b441855d-0224-48d7-b39e-0930dbd9d1d5-kube-api-access-qjhf8\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.551004 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b441855d-0224-48d7-b39e-0930dbd9d1d5-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.760788 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" event={"ID":"b441855d-0224-48d7-b39e-0930dbd9d1d5","Type":"ContainerDied","Data":"a94249647df48f47ccfe4a06f37f789dc8670eab453fe704e39c757c0d124128"} Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.760831 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.760835 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a94249647df48f47ccfe4a06f37f789dc8670eab453fe704e39c757c0d124128" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.841199 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv"] Jan 05 22:04:04 crc kubenswrapper[5000]: E0105 22:04:04.841639 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b441855d-0224-48d7-b39e-0930dbd9d1d5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.841660 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="b441855d-0224-48d7-b39e-0930dbd9d1d5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.841871 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="b441855d-0224-48d7-b39e-0930dbd9d1d5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.842574 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.845700 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.845738 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.845971 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.846026 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.846726 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.847035 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.847094 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.847590 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.859315 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv"] Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.958115 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.958164 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.958183 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.958207 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jrbf\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-kube-api-access-2jrbf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.958633 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.958755 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.958801 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.958824 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.958860 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.958882 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.958954 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.958979 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.959043 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:04 crc kubenswrapper[5000]: I0105 22:04:04.959073 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.060842 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.060923 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.060975 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.061010 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jrbf\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-kube-api-access-2jrbf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.061064 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.061094 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.061142 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.061166 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.061189 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.061217 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.061258 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.061290 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.061358 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.061393 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.065213 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.065299 5000 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.065326 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.065407 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.065482 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.065976 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.066250 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.067101 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.067519 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.067773 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: 
\"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.069097 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.071903 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.073628 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.081273 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jrbf\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-kube-api-access-2jrbf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 
22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.159056 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.699474 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv"] Jan 05 22:04:05 crc kubenswrapper[5000]: I0105 22:04:05.770585 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" event={"ID":"854b990c-d8e5-4735-b5d4-a522969647e9","Type":"ContainerStarted","Data":"db318df03571bed2b2410238f30df7409f7466769c4a1965cecea9ed840004d0"} Jan 05 22:04:06 crc kubenswrapper[5000]: I0105 22:04:06.121204 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:04:06 crc kubenswrapper[5000]: I0105 22:04:06.780162 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" event={"ID":"854b990c-d8e5-4735-b5d4-a522969647e9","Type":"ContainerStarted","Data":"d64b87e909f5e672f056a74a3810cef7652e2ca4d8e439bce3baa98467933a6a"} Jan 05 22:04:06 crc kubenswrapper[5000]: I0105 22:04:06.808842 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" podStartSLOduration=2.396997183 podStartE2EDuration="2.808824217s" podCreationTimestamp="2026-01-05 22:04:04 +0000 UTC" firstStartedPulling="2026-01-05 22:04:05.706680882 +0000 UTC m=+1800.662883351" lastFinishedPulling="2026-01-05 22:04:06.118507926 +0000 UTC m=+1801.074710385" observedRunningTime="2026-01-05 22:04:06.808704723 +0000 UTC m=+1801.764907212" watchObservedRunningTime="2026-01-05 22:04:06.808824217 +0000 UTC m=+1801.765026706" Jan 05 22:04:12 crc kubenswrapper[5000]: I0105 22:04:12.325477 5000 scope.go:117] 
"RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:04:12 crc kubenswrapper[5000]: E0105 22:04:12.326738 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:04:20 crc kubenswrapper[5000]: I0105 22:04:20.126463 5000 scope.go:117] "RemoveContainer" containerID="7132fd9b996a4bceb12335f82b2a026959db3ddb2c21fdf80e307daf2150bf3b" Jan 05 22:04:25 crc kubenswrapper[5000]: I0105 22:04:25.335198 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:04:25 crc kubenswrapper[5000]: E0105 22:04:25.336930 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:04:41 crc kubenswrapper[5000]: I0105 22:04:41.323664 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:04:41 crc kubenswrapper[5000]: E0105 22:04:41.324515 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:04:42 crc kubenswrapper[5000]: I0105 22:04:42.085809 5000 generic.go:334] "Generic (PLEG): container finished" podID="854b990c-d8e5-4735-b5d4-a522969647e9" containerID="d64b87e909f5e672f056a74a3810cef7652e2ca4d8e439bce3baa98467933a6a" exitCode=0 Jan 05 22:04:42 crc kubenswrapper[5000]: I0105 22:04:42.085846 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" event={"ID":"854b990c-d8e5-4735-b5d4-a522969647e9","Type":"ContainerDied","Data":"d64b87e909f5e672f056a74a3810cef7652e2ca4d8e439bce3baa98467933a6a"} Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.494836 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.619792 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-bootstrap-combined-ca-bundle\") pod \"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.619856 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-inventory\") pod \"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.619875 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.619943 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.620684 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-ovn-combined-ca-bundle\") pod \"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.620720 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.620765 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-repo-setup-combined-ca-bundle\") pod \"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.620787 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-libvirt-combined-ca-bundle\") pod \"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: 
\"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.620811 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jrbf\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-kube-api-access-2jrbf\") pod \"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.620836 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.620868 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-telemetry-combined-ca-bundle\") pod \"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.620929 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-neutron-metadata-combined-ca-bundle\") pod \"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.621032 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-ssh-key\") pod \"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 
22:04:43.621069 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-nova-combined-ca-bundle\") pod \"854b990c-d8e5-4735-b5d4-a522969647e9\" (UID: \"854b990c-d8e5-4735-b5d4-a522969647e9\") " Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.626256 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.626730 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.626783 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.626873 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.626921 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.627369 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.627700 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). 
InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.628277 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.630386 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.630429 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-kube-api-access-2jrbf" (OuterVolumeSpecName: "kube-api-access-2jrbf") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). InnerVolumeSpecName "kube-api-access-2jrbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.635335 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.636016 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.664758 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.669875 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-inventory" (OuterVolumeSpecName: "inventory") pod "854b990c-d8e5-4735-b5d4-a522969647e9" (UID: "854b990c-d8e5-4735-b5d4-a522969647e9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.723096 5000 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.723126 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.723137 5000 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.723148 5000 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.723159 5000 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.723168 5000 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.723177 5000 reconciler_common.go:293] "Volume detached for volume 
\"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.723186 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jrbf\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-kube-api-access-2jrbf\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.723195 5000 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.723203 5000 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/854b990c-d8e5-4735-b5d4-a522969647e9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.723212 5000 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.723221 5000 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 22:04:43.723228 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:43 crc kubenswrapper[5000]: I0105 
22:04:43.723237 5000 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854b990c-d8e5-4735-b5d4-a522969647e9-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.105966 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" event={"ID":"854b990c-d8e5-4735-b5d4-a522969647e9","Type":"ContainerDied","Data":"db318df03571bed2b2410238f30df7409f7466769c4a1965cecea9ed840004d0"} Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.106009 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db318df03571bed2b2410238f30df7409f7466769c4a1965cecea9ed840004d0" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.106084 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.194700 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt"] Jan 05 22:04:44 crc kubenswrapper[5000]: E0105 22:04:44.195083 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="854b990c-d8e5-4735-b5d4-a522969647e9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.195101 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="854b990c-d8e5-4735-b5d4-a522969647e9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.195276 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="854b990c-d8e5-4735-b5d4-a522969647e9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.195845 5000 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.199072 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.199288 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.199362 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.199361 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.199491 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.206717 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt"] Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.332325 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5b67\" (UniqueName: \"kubernetes.io/projected/d4dde70e-892f-44c4-b19d-d2e6292c2e18-kube-api-access-h5b67\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.332372 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.332408 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.332475 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.332548 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.433943 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.434431 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.434636 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5b67\" (UniqueName: \"kubernetes.io/projected/d4dde70e-892f-44c4-b19d-d2e6292c2e18-kube-api-access-h5b67\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.435239 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.435938 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.436789 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc 
kubenswrapper[5000]: I0105 22:04:44.439463 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.439952 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.440121 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.459075 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5b67\" (UniqueName: \"kubernetes.io/projected/d4dde70e-892f-44c4-b19d-d2e6292c2e18-kube-api-access-h5b67\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwldt\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:44 crc kubenswrapper[5000]: I0105 22:04:44.512610 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:04:45 crc kubenswrapper[5000]: I0105 22:04:45.014258 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt"] Jan 05 22:04:45 crc kubenswrapper[5000]: I0105 22:04:45.114806 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" event={"ID":"d4dde70e-892f-44c4-b19d-d2e6292c2e18","Type":"ContainerStarted","Data":"4c5bd3f31eb13886d9369337588709555732921003bb33bf11f775caf9777566"} Jan 05 22:04:46 crc kubenswrapper[5000]: I0105 22:04:46.124697 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" event={"ID":"d4dde70e-892f-44c4-b19d-d2e6292c2e18","Type":"ContainerStarted","Data":"faaab452aa8d7a75a71d1d34facfbd4b312955264ecd044b657edbadacfc30ea"} Jan 05 22:04:46 crc kubenswrapper[5000]: I0105 22:04:46.149489 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" podStartSLOduration=1.702661164 podStartE2EDuration="2.149463701s" podCreationTimestamp="2026-01-05 22:04:44 +0000 UTC" firstStartedPulling="2026-01-05 22:04:45.020560669 +0000 UTC m=+1839.976763138" lastFinishedPulling="2026-01-05 22:04:45.467363206 +0000 UTC m=+1840.423565675" observedRunningTime="2026-01-05 22:04:46.139216559 +0000 UTC m=+1841.095419028" watchObservedRunningTime="2026-01-05 22:04:46.149463701 +0000 UTC m=+1841.105666180" Jan 05 22:04:56 crc kubenswrapper[5000]: I0105 22:04:56.324221 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:04:57 crc kubenswrapper[5000]: I0105 22:04:57.222970 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" 
event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"7e3da05d5f67590c9b2527cc500930111ded9c9f1144452852c6a5338d56bdf7"} Jan 05 22:05:51 crc kubenswrapper[5000]: E0105 22:05:51.069844 5000 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4dde70e_892f_44c4_b19d_d2e6292c2e18.slice/crio-conmon-faaab452aa8d7a75a71d1d34facfbd4b312955264ecd044b657edbadacfc30ea.scope\": RecentStats: unable to find data in memory cache]" Jan 05 22:05:51 crc kubenswrapper[5000]: I0105 22:05:51.711921 5000 generic.go:334] "Generic (PLEG): container finished" podID="d4dde70e-892f-44c4-b19d-d2e6292c2e18" containerID="faaab452aa8d7a75a71d1d34facfbd4b312955264ecd044b657edbadacfc30ea" exitCode=0 Jan 05 22:05:51 crc kubenswrapper[5000]: I0105 22:05:51.711958 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" event={"ID":"d4dde70e-892f-44c4-b19d-d2e6292c2e18","Type":"ContainerDied","Data":"faaab452aa8d7a75a71d1d34facfbd4b312955264ecd044b657edbadacfc30ea"} Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.167112 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.212823 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ovncontroller-config-0\") pod \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.212922 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ovn-combined-ca-bundle\") pod \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.213054 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ssh-key\") pod \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.213127 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-inventory\") pod \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.213251 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5b67\" (UniqueName: \"kubernetes.io/projected/d4dde70e-892f-44c4-b19d-d2e6292c2e18-kube-api-access-h5b67\") pod \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\" (UID: \"d4dde70e-892f-44c4-b19d-d2e6292c2e18\") " Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.219959 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/d4dde70e-892f-44c4-b19d-d2e6292c2e18-kube-api-access-h5b67" (OuterVolumeSpecName: "kube-api-access-h5b67") pod "d4dde70e-892f-44c4-b19d-d2e6292c2e18" (UID: "d4dde70e-892f-44c4-b19d-d2e6292c2e18"). InnerVolumeSpecName "kube-api-access-h5b67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.222648 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d4dde70e-892f-44c4-b19d-d2e6292c2e18" (UID: "d4dde70e-892f-44c4-b19d-d2e6292c2e18"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.239565 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "d4dde70e-892f-44c4-b19d-d2e6292c2e18" (UID: "d4dde70e-892f-44c4-b19d-d2e6292c2e18"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.245347 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-inventory" (OuterVolumeSpecName: "inventory") pod "d4dde70e-892f-44c4-b19d-d2e6292c2e18" (UID: "d4dde70e-892f-44c4-b19d-d2e6292c2e18"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.255484 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d4dde70e-892f-44c4-b19d-d2e6292c2e18" (UID: "d4dde70e-892f-44c4-b19d-d2e6292c2e18"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.315752 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.315781 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5b67\" (UniqueName: \"kubernetes.io/projected/d4dde70e-892f-44c4-b19d-d2e6292c2e18-kube-api-access-h5b67\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.315793 5000 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.315802 5000 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.315811 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4dde70e-892f-44c4-b19d-d2e6292c2e18-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.732137 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" event={"ID":"d4dde70e-892f-44c4-b19d-d2e6292c2e18","Type":"ContainerDied","Data":"4c5bd3f31eb13886d9369337588709555732921003bb33bf11f775caf9777566"} Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.732185 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c5bd3f31eb13886d9369337588709555732921003bb33bf11f775caf9777566" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.732231 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwldt" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.895569 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg"] Jan 05 22:05:53 crc kubenswrapper[5000]: E0105 22:05:53.895945 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4dde70e-892f-44c4-b19d-d2e6292c2e18" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.895961 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4dde70e-892f-44c4-b19d-d2e6292c2e18" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.896159 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4dde70e-892f-44c4-b19d-d2e6292c2e18" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.897039 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.899964 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.900577 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.901122 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.901711 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.901917 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.903011 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 05 22:05:53 crc kubenswrapper[5000]: I0105 22:05:53.904827 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg"] Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.028505 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.028581 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.028603 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.028805 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.029114 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.029227 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rpghm\" (UniqueName: \"kubernetes.io/projected/e386442b-3735-4e85-8361-5a795c888c81-kube-api-access-rpghm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.130474 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.130548 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.130567 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.130605 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.130687 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.130709 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpghm\" (UniqueName: \"kubernetes.io/projected/e386442b-3735-4e85-8361-5a795c888c81-kube-api-access-rpghm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.134984 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.135166 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.136978 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.141972 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.143012 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.160591 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpghm\" (UniqueName: \"kubernetes.io/projected/e386442b-3735-4e85-8361-5a795c888c81-kube-api-access-rpghm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.219877 
5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.775379 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg"] Jan 05 22:05:54 crc kubenswrapper[5000]: I0105 22:05:54.782152 5000 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:05:55 crc kubenswrapper[5000]: I0105 22:05:55.747906 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" event={"ID":"e386442b-3735-4e85-8361-5a795c888c81","Type":"ContainerStarted","Data":"ec5d526a8bf218fb981d24e891293daaf02021c707477346faec6b6de1d3331d"} Jan 05 22:05:55 crc kubenswrapper[5000]: I0105 22:05:55.748671 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" event={"ID":"e386442b-3735-4e85-8361-5a795c888c81","Type":"ContainerStarted","Data":"a166cadd6562f712a33d0f5f30da046be135397d43abc446e5ad642ad9a648dc"} Jan 05 22:05:55 crc kubenswrapper[5000]: I0105 22:05:55.773006 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" podStartSLOduration=2.343811484 podStartE2EDuration="2.772984699s" podCreationTimestamp="2026-01-05 22:05:53 +0000 UTC" firstStartedPulling="2026-01-05 22:05:54.781770881 +0000 UTC m=+1909.737973350" lastFinishedPulling="2026-01-05 22:05:55.210944096 +0000 UTC m=+1910.167146565" observedRunningTime="2026-01-05 22:05:55.764543208 +0000 UTC m=+1910.720745677" watchObservedRunningTime="2026-01-05 22:05:55.772984699 +0000 UTC m=+1910.729187178" Jan 05 22:06:44 crc kubenswrapper[5000]: I0105 22:06:44.134802 5000 generic.go:334] "Generic (PLEG): container finished" 
podID="e386442b-3735-4e85-8361-5a795c888c81" containerID="ec5d526a8bf218fb981d24e891293daaf02021c707477346faec6b6de1d3331d" exitCode=0 Jan 05 22:06:44 crc kubenswrapper[5000]: I0105 22:06:44.134862 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" event={"ID":"e386442b-3735-4e85-8361-5a795c888c81","Type":"ContainerDied","Data":"ec5d526a8bf218fb981d24e891293daaf02021c707477346faec6b6de1d3331d"} Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.557405 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.664431 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-neutron-metadata-combined-ca-bundle\") pod \"e386442b-3735-4e85-8361-5a795c888c81\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.664841 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-inventory\") pod \"e386442b-3735-4e85-8361-5a795c888c81\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.665132 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-nova-metadata-neutron-config-0\") pod \"e386442b-3735-4e85-8361-5a795c888c81\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.665411 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-ssh-key\") pod \"e386442b-3735-4e85-8361-5a795c888c81\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.665778 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e386442b-3735-4e85-8361-5a795c888c81\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.666095 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpghm\" (UniqueName: \"kubernetes.io/projected/e386442b-3735-4e85-8361-5a795c888c81-kube-api-access-rpghm\") pod \"e386442b-3735-4e85-8361-5a795c888c81\" (UID: \"e386442b-3735-4e85-8361-5a795c888c81\") " Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.672492 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e386442b-3735-4e85-8361-5a795c888c81" (UID: "e386442b-3735-4e85-8361-5a795c888c81"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.677177 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e386442b-3735-4e85-8361-5a795c888c81-kube-api-access-rpghm" (OuterVolumeSpecName: "kube-api-access-rpghm") pod "e386442b-3735-4e85-8361-5a795c888c81" (UID: "e386442b-3735-4e85-8361-5a795c888c81"). InnerVolumeSpecName "kube-api-access-rpghm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.692170 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e386442b-3735-4e85-8361-5a795c888c81" (UID: "e386442b-3735-4e85-8361-5a795c888c81"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.692200 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e386442b-3735-4e85-8361-5a795c888c81" (UID: "e386442b-3735-4e85-8361-5a795c888c81"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.696849 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-inventory" (OuterVolumeSpecName: "inventory") pod "e386442b-3735-4e85-8361-5a795c888c81" (UID: "e386442b-3735-4e85-8361-5a795c888c81"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.699236 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e386442b-3735-4e85-8361-5a795c888c81" (UID: "e386442b-3735-4e85-8361-5a795c888c81"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.769431 5000 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.769507 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.769521 5000 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.769532 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.769547 5000 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e386442b-3735-4e85-8361-5a795c888c81-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:06:45 crc kubenswrapper[5000]: I0105 22:06:45.769560 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpghm\" (UniqueName: \"kubernetes.io/projected/e386442b-3735-4e85-8361-5a795c888c81-kube-api-access-rpghm\") on node \"crc\" DevicePath \"\"" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.155584 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" 
event={"ID":"e386442b-3735-4e85-8361-5a795c888c81","Type":"ContainerDied","Data":"a166cadd6562f712a33d0f5f30da046be135397d43abc446e5ad642ad9a648dc"} Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.155621 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a166cadd6562f712a33d0f5f30da046be135397d43abc446e5ad642ad9a648dc" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.155670 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.253572 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw"] Jan 05 22:06:46 crc kubenswrapper[5000]: E0105 22:06:46.254040 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e386442b-3735-4e85-8361-5a795c888c81" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.254061 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="e386442b-3735-4e85-8361-5a795c888c81" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.254352 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="e386442b-3735-4e85-8361-5a795c888c81" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.255088 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.257374 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.257808 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.258355 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.259453 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.262707 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.268859 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw"] Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.382202 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.382589 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.382675 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99xmh\" (UniqueName: \"kubernetes.io/projected/d3f9a210-263c-4290-8509-6b86ade6772c-kube-api-access-99xmh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.382854 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.383020 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.484579 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.484853 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.484879 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99xmh\" (UniqueName: \"kubernetes.io/projected/d3f9a210-263c-4290-8509-6b86ade6772c-kube-api-access-99xmh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.484956 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.485010 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.490292 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.490460 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.490671 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.498129 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.503486 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99xmh\" (UniqueName: \"kubernetes.io/projected/d3f9a210-263c-4290-8509-6b86ade6772c-kube-api-access-99xmh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:46 crc kubenswrapper[5000]: I0105 22:06:46.571620 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:06:47 crc kubenswrapper[5000]: I0105 22:06:47.076993 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw"] Jan 05 22:06:47 crc kubenswrapper[5000]: I0105 22:06:47.165064 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" event={"ID":"d3f9a210-263c-4290-8509-6b86ade6772c","Type":"ContainerStarted","Data":"29db8dd7938f1776c66ef2416adb6da350d54ebbff1501fff3926f0a62b3994c"} Jan 05 22:06:48 crc kubenswrapper[5000]: I0105 22:06:48.176360 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" event={"ID":"d3f9a210-263c-4290-8509-6b86ade6772c","Type":"ContainerStarted","Data":"340558407dc51c383168d19640eb0f91918719454663e731d811ca0e3adfd75c"} Jan 05 22:06:48 crc kubenswrapper[5000]: I0105 22:06:48.203005 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" podStartSLOduration=1.739888943 podStartE2EDuration="2.202988615s" podCreationTimestamp="2026-01-05 22:06:46 +0000 UTC" firstStartedPulling="2026-01-05 22:06:47.087125304 +0000 UTC m=+1962.043327773" lastFinishedPulling="2026-01-05 22:06:47.550224976 +0000 UTC m=+1962.506427445" observedRunningTime="2026-01-05 22:06:48.192365712 +0000 UTC m=+1963.148568181" watchObservedRunningTime="2026-01-05 22:06:48.202988615 +0000 UTC m=+1963.159191084" Jan 05 22:07:23 crc kubenswrapper[5000]: I0105 22:07:23.098371 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:07:23 crc kubenswrapper[5000]: I0105 
22:07:23.098956 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.035385 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wnjtq"] Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.041470 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.050686 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnjtq"] Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.124128 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bba297-120f-43ec-ae34-d5e867d444ef-catalog-content\") pod \"community-operators-wnjtq\" (UID: \"f2bba297-120f-43ec-ae34-d5e867d444ef\") " pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.124175 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggcf9\" (UniqueName: \"kubernetes.io/projected/f2bba297-120f-43ec-ae34-d5e867d444ef-kube-api-access-ggcf9\") pod \"community-operators-wnjtq\" (UID: \"f2bba297-120f-43ec-ae34-d5e867d444ef\") " pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.124307 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bba297-120f-43ec-ae34-d5e867d444ef-utilities\") pod 
\"community-operators-wnjtq\" (UID: \"f2bba297-120f-43ec-ae34-d5e867d444ef\") " pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.226776 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bba297-120f-43ec-ae34-d5e867d444ef-catalog-content\") pod \"community-operators-wnjtq\" (UID: \"f2bba297-120f-43ec-ae34-d5e867d444ef\") " pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.226823 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggcf9\" (UniqueName: \"kubernetes.io/projected/f2bba297-120f-43ec-ae34-d5e867d444ef-kube-api-access-ggcf9\") pod \"community-operators-wnjtq\" (UID: \"f2bba297-120f-43ec-ae34-d5e867d444ef\") " pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.226933 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bba297-120f-43ec-ae34-d5e867d444ef-utilities\") pod \"community-operators-wnjtq\" (UID: \"f2bba297-120f-43ec-ae34-d5e867d444ef\") " pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.227429 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bba297-120f-43ec-ae34-d5e867d444ef-catalog-content\") pod \"community-operators-wnjtq\" (UID: \"f2bba297-120f-43ec-ae34-d5e867d444ef\") " pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.227470 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bba297-120f-43ec-ae34-d5e867d444ef-utilities\") pod \"community-operators-wnjtq\" (UID: 
\"f2bba297-120f-43ec-ae34-d5e867d444ef\") " pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.251248 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggcf9\" (UniqueName: \"kubernetes.io/projected/f2bba297-120f-43ec-ae34-d5e867d444ef-kube-api-access-ggcf9\") pod \"community-operators-wnjtq\" (UID: \"f2bba297-120f-43ec-ae34-d5e867d444ef\") " pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.378314 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:07:50 crc kubenswrapper[5000]: I0105 22:07:50.881883 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnjtq"] Jan 05 22:07:51 crc kubenswrapper[5000]: I0105 22:07:51.723482 5000 generic.go:334] "Generic (PLEG): container finished" podID="f2bba297-120f-43ec-ae34-d5e867d444ef" containerID="b9ae201ea892c7690ee1ad6306b28c0de6930c77a75329f4f35e66dae5aa4f23" exitCode=0 Jan 05 22:07:51 crc kubenswrapper[5000]: I0105 22:07:51.723537 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnjtq" event={"ID":"f2bba297-120f-43ec-ae34-d5e867d444ef","Type":"ContainerDied","Data":"b9ae201ea892c7690ee1ad6306b28c0de6930c77a75329f4f35e66dae5aa4f23"} Jan 05 22:07:51 crc kubenswrapper[5000]: I0105 22:07:51.723907 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnjtq" event={"ID":"f2bba297-120f-43ec-ae34-d5e867d444ef","Type":"ContainerStarted","Data":"345cd3f70f1c1bd0eb8f5d49326d903eb974c7afe71f8aa16640589ddbd8164b"} Jan 05 22:07:53 crc kubenswrapper[5000]: I0105 22:07:53.098998 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:07:53 crc kubenswrapper[5000]: I0105 22:07:53.099327 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:07:53 crc kubenswrapper[5000]: I0105 22:07:53.740557 5000 generic.go:334] "Generic (PLEG): container finished" podID="f2bba297-120f-43ec-ae34-d5e867d444ef" containerID="3a21a35f374085edd018fe6518b29596500fc9b46d2f5938c46bba3c6bbcc897" exitCode=0 Jan 05 22:07:53 crc kubenswrapper[5000]: I0105 22:07:53.740616 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnjtq" event={"ID":"f2bba297-120f-43ec-ae34-d5e867d444ef","Type":"ContainerDied","Data":"3a21a35f374085edd018fe6518b29596500fc9b46d2f5938c46bba3c6bbcc897"} Jan 05 22:07:54 crc kubenswrapper[5000]: I0105 22:07:54.749964 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnjtq" event={"ID":"f2bba297-120f-43ec-ae34-d5e867d444ef","Type":"ContainerStarted","Data":"ffb5d1a77a334b52cbb4ef87468ef734e5bf1cb58e8c97e2550e4c6758187433"} Jan 05 22:07:54 crc kubenswrapper[5000]: I0105 22:07:54.774272 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wnjtq" podStartSLOduration=2.25840816 podStartE2EDuration="4.774255861s" podCreationTimestamp="2026-01-05 22:07:50 +0000 UTC" firstStartedPulling="2026-01-05 22:07:51.726458857 +0000 UTC m=+2026.682661346" lastFinishedPulling="2026-01-05 22:07:54.242306578 +0000 UTC m=+2029.198509047" observedRunningTime="2026-01-05 22:07:54.769087064 +0000 UTC m=+2029.725289533" 
watchObservedRunningTime="2026-01-05 22:07:54.774255861 +0000 UTC m=+2029.730458330" Jan 05 22:08:00 crc kubenswrapper[5000]: I0105 22:08:00.379026 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:08:00 crc kubenswrapper[5000]: I0105 22:08:00.380260 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:08:00 crc kubenswrapper[5000]: I0105 22:08:00.424597 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:08:00 crc kubenswrapper[5000]: I0105 22:08:00.840843 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:08:02 crc kubenswrapper[5000]: I0105 22:08:02.413908 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnjtq"] Jan 05 22:08:02 crc kubenswrapper[5000]: I0105 22:08:02.815816 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wnjtq" podUID="f2bba297-120f-43ec-ae34-d5e867d444ef" containerName="registry-server" containerID="cri-o://ffb5d1a77a334b52cbb4ef87468ef734e5bf1cb58e8c97e2550e4c6758187433" gracePeriod=2 Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.208067 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.368166 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bba297-120f-43ec-ae34-d5e867d444ef-utilities\") pod \"f2bba297-120f-43ec-ae34-d5e867d444ef\" (UID: \"f2bba297-120f-43ec-ae34-d5e867d444ef\") " Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.368758 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bba297-120f-43ec-ae34-d5e867d444ef-catalog-content\") pod \"f2bba297-120f-43ec-ae34-d5e867d444ef\" (UID: \"f2bba297-120f-43ec-ae34-d5e867d444ef\") " Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.368980 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggcf9\" (UniqueName: \"kubernetes.io/projected/f2bba297-120f-43ec-ae34-d5e867d444ef-kube-api-access-ggcf9\") pod \"f2bba297-120f-43ec-ae34-d5e867d444ef\" (UID: \"f2bba297-120f-43ec-ae34-d5e867d444ef\") " Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.368902 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bba297-120f-43ec-ae34-d5e867d444ef-utilities" (OuterVolumeSpecName: "utilities") pod "f2bba297-120f-43ec-ae34-d5e867d444ef" (UID: "f2bba297-120f-43ec-ae34-d5e867d444ef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.370126 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bba297-120f-43ec-ae34-d5e867d444ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.374644 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bba297-120f-43ec-ae34-d5e867d444ef-kube-api-access-ggcf9" (OuterVolumeSpecName: "kube-api-access-ggcf9") pod "f2bba297-120f-43ec-ae34-d5e867d444ef" (UID: "f2bba297-120f-43ec-ae34-d5e867d444ef"). InnerVolumeSpecName "kube-api-access-ggcf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.421813 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bba297-120f-43ec-ae34-d5e867d444ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2bba297-120f-43ec-ae34-d5e867d444ef" (UID: "f2bba297-120f-43ec-ae34-d5e867d444ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.471311 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bba297-120f-43ec-ae34-d5e867d444ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.471363 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggcf9\" (UniqueName: \"kubernetes.io/projected/f2bba297-120f-43ec-ae34-d5e867d444ef-kube-api-access-ggcf9\") on node \"crc\" DevicePath \"\"" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.826569 5000 generic.go:334] "Generic (PLEG): container finished" podID="f2bba297-120f-43ec-ae34-d5e867d444ef" containerID="ffb5d1a77a334b52cbb4ef87468ef734e5bf1cb58e8c97e2550e4c6758187433" exitCode=0 Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.826615 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnjtq" event={"ID":"f2bba297-120f-43ec-ae34-d5e867d444ef","Type":"ContainerDied","Data":"ffb5d1a77a334b52cbb4ef87468ef734e5bf1cb58e8c97e2550e4c6758187433"} Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.826643 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnjtq" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.826668 5000 scope.go:117] "RemoveContainer" containerID="ffb5d1a77a334b52cbb4ef87468ef734e5bf1cb58e8c97e2550e4c6758187433" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.826654 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnjtq" event={"ID":"f2bba297-120f-43ec-ae34-d5e867d444ef","Type":"ContainerDied","Data":"345cd3f70f1c1bd0eb8f5d49326d903eb974c7afe71f8aa16640589ddbd8164b"} Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.850264 5000 scope.go:117] "RemoveContainer" containerID="3a21a35f374085edd018fe6518b29596500fc9b46d2f5938c46bba3c6bbcc897" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.857903 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnjtq"] Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.866606 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wnjtq"] Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.883232 5000 scope.go:117] "RemoveContainer" containerID="b9ae201ea892c7690ee1ad6306b28c0de6930c77a75329f4f35e66dae5aa4f23" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.924343 5000 scope.go:117] "RemoveContainer" containerID="ffb5d1a77a334b52cbb4ef87468ef734e5bf1cb58e8c97e2550e4c6758187433" Jan 05 22:08:03 crc kubenswrapper[5000]: E0105 22:08:03.925135 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffb5d1a77a334b52cbb4ef87468ef734e5bf1cb58e8c97e2550e4c6758187433\": container with ID starting with ffb5d1a77a334b52cbb4ef87468ef734e5bf1cb58e8c97e2550e4c6758187433 not found: ID does not exist" containerID="ffb5d1a77a334b52cbb4ef87468ef734e5bf1cb58e8c97e2550e4c6758187433" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.925196 5000 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffb5d1a77a334b52cbb4ef87468ef734e5bf1cb58e8c97e2550e4c6758187433"} err="failed to get container status \"ffb5d1a77a334b52cbb4ef87468ef734e5bf1cb58e8c97e2550e4c6758187433\": rpc error: code = NotFound desc = could not find container \"ffb5d1a77a334b52cbb4ef87468ef734e5bf1cb58e8c97e2550e4c6758187433\": container with ID starting with ffb5d1a77a334b52cbb4ef87468ef734e5bf1cb58e8c97e2550e4c6758187433 not found: ID does not exist" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.925229 5000 scope.go:117] "RemoveContainer" containerID="3a21a35f374085edd018fe6518b29596500fc9b46d2f5938c46bba3c6bbcc897" Jan 05 22:08:03 crc kubenswrapper[5000]: E0105 22:08:03.925818 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a21a35f374085edd018fe6518b29596500fc9b46d2f5938c46bba3c6bbcc897\": container with ID starting with 3a21a35f374085edd018fe6518b29596500fc9b46d2f5938c46bba3c6bbcc897 not found: ID does not exist" containerID="3a21a35f374085edd018fe6518b29596500fc9b46d2f5938c46bba3c6bbcc897" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.925917 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a21a35f374085edd018fe6518b29596500fc9b46d2f5938c46bba3c6bbcc897"} err="failed to get container status \"3a21a35f374085edd018fe6518b29596500fc9b46d2f5938c46bba3c6bbcc897\": rpc error: code = NotFound desc = could not find container \"3a21a35f374085edd018fe6518b29596500fc9b46d2f5938c46bba3c6bbcc897\": container with ID starting with 3a21a35f374085edd018fe6518b29596500fc9b46d2f5938c46bba3c6bbcc897 not found: ID does not exist" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.925956 5000 scope.go:117] "RemoveContainer" containerID="b9ae201ea892c7690ee1ad6306b28c0de6930c77a75329f4f35e66dae5aa4f23" Jan 05 22:08:03 crc kubenswrapper[5000]: E0105 
22:08:03.926392 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9ae201ea892c7690ee1ad6306b28c0de6930c77a75329f4f35e66dae5aa4f23\": container with ID starting with b9ae201ea892c7690ee1ad6306b28c0de6930c77a75329f4f35e66dae5aa4f23 not found: ID does not exist" containerID="b9ae201ea892c7690ee1ad6306b28c0de6930c77a75329f4f35e66dae5aa4f23" Jan 05 22:08:03 crc kubenswrapper[5000]: I0105 22:08:03.926446 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9ae201ea892c7690ee1ad6306b28c0de6930c77a75329f4f35e66dae5aa4f23"} err="failed to get container status \"b9ae201ea892c7690ee1ad6306b28c0de6930c77a75329f4f35e66dae5aa4f23\": rpc error: code = NotFound desc = could not find container \"b9ae201ea892c7690ee1ad6306b28c0de6930c77a75329f4f35e66dae5aa4f23\": container with ID starting with b9ae201ea892c7690ee1ad6306b28c0de6930c77a75329f4f35e66dae5aa4f23 not found: ID does not exist" Jan 05 22:08:05 crc kubenswrapper[5000]: I0105 22:08:05.335789 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bba297-120f-43ec-ae34-d5e867d444ef" path="/var/lib/kubelet/pods/f2bba297-120f-43ec-ae34-d5e867d444ef/volumes" Jan 05 22:08:23 crc kubenswrapper[5000]: I0105 22:08:23.098816 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:08:23 crc kubenswrapper[5000]: I0105 22:08:23.099554 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 05 22:08:23 crc kubenswrapper[5000]: I0105 22:08:23.099623 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 22:08:23 crc kubenswrapper[5000]: I0105 22:08:23.100698 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e3da05d5f67590c9b2527cc500930111ded9c9f1144452852c6a5338d56bdf7"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:08:23 crc kubenswrapper[5000]: I0105 22:08:23.100809 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://7e3da05d5f67590c9b2527cc500930111ded9c9f1144452852c6a5338d56bdf7" gracePeriod=600 Jan 05 22:08:24 crc kubenswrapper[5000]: I0105 22:08:24.012347 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="7e3da05d5f67590c9b2527cc500930111ded9c9f1144452852c6a5338d56bdf7" exitCode=0 Jan 05 22:08:24 crc kubenswrapper[5000]: I0105 22:08:24.012627 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"7e3da05d5f67590c9b2527cc500930111ded9c9f1144452852c6a5338d56bdf7"} Jan 05 22:08:24 crc kubenswrapper[5000]: I0105 22:08:24.012661 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558"} Jan 05 22:08:24 crc 
kubenswrapper[5000]: I0105 22:08:24.012679 5000 scope.go:117] "RemoveContainer" containerID="3cc271e38bc4d23ddc0d12e0ef028e91290ce7eb7dc24613b2355e8255800269" Jan 05 22:10:23 crc kubenswrapper[5000]: I0105 22:10:23.098921 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:10:23 crc kubenswrapper[5000]: I0105 22:10:23.099509 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.147740 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jdx2j"] Jan 05 22:10:42 crc kubenswrapper[5000]: E0105 22:10:42.148818 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bba297-120f-43ec-ae34-d5e867d444ef" containerName="registry-server" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.148833 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bba297-120f-43ec-ae34-d5e867d444ef" containerName="registry-server" Jan 05 22:10:42 crc kubenswrapper[5000]: E0105 22:10:42.148865 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bba297-120f-43ec-ae34-d5e867d444ef" containerName="extract-content" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.148872 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bba297-120f-43ec-ae34-d5e867d444ef" containerName="extract-content" Jan 05 22:10:42 crc kubenswrapper[5000]: E0105 22:10:42.148901 5000 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f2bba297-120f-43ec-ae34-d5e867d444ef" containerName="extract-utilities" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.148910 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bba297-120f-43ec-ae34-d5e867d444ef" containerName="extract-utilities" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.149138 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bba297-120f-43ec-ae34-d5e867d444ef" containerName="registry-server" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.151783 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.164218 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jdx2j"] Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.325356 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a929b189-02ed-46ee-91a4-9f69f2704ba2-utilities\") pod \"redhat-operators-jdx2j\" (UID: \"a929b189-02ed-46ee-91a4-9f69f2704ba2\") " pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.325469 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a929b189-02ed-46ee-91a4-9f69f2704ba2-catalog-content\") pod \"redhat-operators-jdx2j\" (UID: \"a929b189-02ed-46ee-91a4-9f69f2704ba2\") " pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.325560 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzth6\" (UniqueName: \"kubernetes.io/projected/a929b189-02ed-46ee-91a4-9f69f2704ba2-kube-api-access-zzth6\") pod \"redhat-operators-jdx2j\" (UID: 
\"a929b189-02ed-46ee-91a4-9f69f2704ba2\") " pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.427146 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a929b189-02ed-46ee-91a4-9f69f2704ba2-catalog-content\") pod \"redhat-operators-jdx2j\" (UID: \"a929b189-02ed-46ee-91a4-9f69f2704ba2\") " pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.427362 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzth6\" (UniqueName: \"kubernetes.io/projected/a929b189-02ed-46ee-91a4-9f69f2704ba2-kube-api-access-zzth6\") pod \"redhat-operators-jdx2j\" (UID: \"a929b189-02ed-46ee-91a4-9f69f2704ba2\") " pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.427493 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a929b189-02ed-46ee-91a4-9f69f2704ba2-utilities\") pod \"redhat-operators-jdx2j\" (UID: \"a929b189-02ed-46ee-91a4-9f69f2704ba2\") " pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.427771 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a929b189-02ed-46ee-91a4-9f69f2704ba2-catalog-content\") pod \"redhat-operators-jdx2j\" (UID: \"a929b189-02ed-46ee-91a4-9f69f2704ba2\") " pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.428144 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a929b189-02ed-46ee-91a4-9f69f2704ba2-utilities\") pod \"redhat-operators-jdx2j\" (UID: \"a929b189-02ed-46ee-91a4-9f69f2704ba2\") " 
pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.450326 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzth6\" (UniqueName: \"kubernetes.io/projected/a929b189-02ed-46ee-91a4-9f69f2704ba2-kube-api-access-zzth6\") pod \"redhat-operators-jdx2j\" (UID: \"a929b189-02ed-46ee-91a4-9f69f2704ba2\") " pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.474490 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:10:42 crc kubenswrapper[5000]: I0105 22:10:42.939861 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jdx2j"] Jan 05 22:10:43 crc kubenswrapper[5000]: I0105 22:10:43.306465 5000 generic.go:334] "Generic (PLEG): container finished" podID="a929b189-02ed-46ee-91a4-9f69f2704ba2" containerID="148a9bb38e43b0f41d229cc4f1b7e69f2392612acdd016e59c41587ac9cf8110" exitCode=0 Jan 05 22:10:43 crc kubenswrapper[5000]: I0105 22:10:43.306563 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdx2j" event={"ID":"a929b189-02ed-46ee-91a4-9f69f2704ba2","Type":"ContainerDied","Data":"148a9bb38e43b0f41d229cc4f1b7e69f2392612acdd016e59c41587ac9cf8110"} Jan 05 22:10:43 crc kubenswrapper[5000]: I0105 22:10:43.308724 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdx2j" event={"ID":"a929b189-02ed-46ee-91a4-9f69f2704ba2","Type":"ContainerStarted","Data":"f0cd5c1205fc78de9323d710c43f05af9d333091a26bad867d8f06d30cce326f"} Jan 05 22:10:45 crc kubenswrapper[5000]: I0105 22:10:45.407527 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdx2j" 
event={"ID":"a929b189-02ed-46ee-91a4-9f69f2704ba2","Type":"ContainerStarted","Data":"7869d2663b67c24709a4c63d4b0c57b7e85c1ba31eaabb7ddf2145636067a04d"} Jan 05 22:10:47 crc kubenswrapper[5000]: I0105 22:10:47.427575 5000 generic.go:334] "Generic (PLEG): container finished" podID="a929b189-02ed-46ee-91a4-9f69f2704ba2" containerID="7869d2663b67c24709a4c63d4b0c57b7e85c1ba31eaabb7ddf2145636067a04d" exitCode=0 Jan 05 22:10:47 crc kubenswrapper[5000]: I0105 22:10:47.427635 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdx2j" event={"ID":"a929b189-02ed-46ee-91a4-9f69f2704ba2","Type":"ContainerDied","Data":"7869d2663b67c24709a4c63d4b0c57b7e85c1ba31eaabb7ddf2145636067a04d"} Jan 05 22:10:48 crc kubenswrapper[5000]: I0105 22:10:48.439565 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdx2j" event={"ID":"a929b189-02ed-46ee-91a4-9f69f2704ba2","Type":"ContainerStarted","Data":"4ecd5ec17afb3cb31695688ad6a238f4971e03fa82ff5c2029d4bbb101bba1a0"} Jan 05 22:10:48 crc kubenswrapper[5000]: I0105 22:10:48.441641 5000 generic.go:334] "Generic (PLEG): container finished" podID="d3f9a210-263c-4290-8509-6b86ade6772c" containerID="340558407dc51c383168d19640eb0f91918719454663e731d811ca0e3adfd75c" exitCode=0 Jan 05 22:10:48 crc kubenswrapper[5000]: I0105 22:10:48.441688 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" event={"ID":"d3f9a210-263c-4290-8509-6b86ade6772c","Type":"ContainerDied","Data":"340558407dc51c383168d19640eb0f91918719454663e731d811ca0e3adfd75c"} Jan 05 22:10:48 crc kubenswrapper[5000]: I0105 22:10:48.464740 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jdx2j" podStartSLOduration=1.686514407 podStartE2EDuration="6.464723234s" podCreationTimestamp="2026-01-05 22:10:42 +0000 UTC" firstStartedPulling="2026-01-05 22:10:43.308654135 +0000 
UTC m=+2198.264856604" lastFinishedPulling="2026-01-05 22:10:48.086862942 +0000 UTC m=+2203.043065431" observedRunningTime="2026-01-05 22:10:48.458703633 +0000 UTC m=+2203.414906102" watchObservedRunningTime="2026-01-05 22:10:48.464723234 +0000 UTC m=+2203.420925703" Jan 05 22:10:49 crc kubenswrapper[5000]: I0105 22:10:49.889571 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:10:49 crc kubenswrapper[5000]: I0105 22:10:49.910597 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-libvirt-combined-ca-bundle\") pod \"d3f9a210-263c-4290-8509-6b86ade6772c\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " Jan 05 22:10:49 crc kubenswrapper[5000]: I0105 22:10:49.910697 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-libvirt-secret-0\") pod \"d3f9a210-263c-4290-8509-6b86ade6772c\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " Jan 05 22:10:49 crc kubenswrapper[5000]: I0105 22:10:49.910740 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-inventory\") pod \"d3f9a210-263c-4290-8509-6b86ade6772c\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " Jan 05 22:10:49 crc kubenswrapper[5000]: I0105 22:10:49.916592 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d3f9a210-263c-4290-8509-6b86ade6772c" (UID: "d3f9a210-263c-4290-8509-6b86ade6772c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:10:49 crc kubenswrapper[5000]: I0105 22:10:49.950503 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d3f9a210-263c-4290-8509-6b86ade6772c" (UID: "d3f9a210-263c-4290-8509-6b86ade6772c"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:10:49 crc kubenswrapper[5000]: I0105 22:10:49.956699 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-inventory" (OuterVolumeSpecName: "inventory") pod "d3f9a210-263c-4290-8509-6b86ade6772c" (UID: "d3f9a210-263c-4290-8509-6b86ade6772c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.011870 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-ssh-key\") pod \"d3f9a210-263c-4290-8509-6b86ade6772c\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.011913 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99xmh\" (UniqueName: \"kubernetes.io/projected/d3f9a210-263c-4290-8509-6b86ade6772c-kube-api-access-99xmh\") pod \"d3f9a210-263c-4290-8509-6b86ade6772c\" (UID: \"d3f9a210-263c-4290-8509-6b86ade6772c\") " Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.012191 5000 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.012208 5000 reconciler_common.go:293] "Volume 
detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.012217 5000 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.017802 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f9a210-263c-4290-8509-6b86ade6772c-kube-api-access-99xmh" (OuterVolumeSpecName: "kube-api-access-99xmh") pod "d3f9a210-263c-4290-8509-6b86ade6772c" (UID: "d3f9a210-263c-4290-8509-6b86ade6772c"). InnerVolumeSpecName "kube-api-access-99xmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.053960 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d3f9a210-263c-4290-8509-6b86ade6772c" (UID: "d3f9a210-263c-4290-8509-6b86ade6772c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.113538 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3f9a210-263c-4290-8509-6b86ade6772c-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.113579 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99xmh\" (UniqueName: \"kubernetes.io/projected/d3f9a210-263c-4290-8509-6b86ade6772c-kube-api-access-99xmh\") on node \"crc\" DevicePath \"\"" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.461614 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" event={"ID":"d3f9a210-263c-4290-8509-6b86ade6772c","Type":"ContainerDied","Data":"29db8dd7938f1776c66ef2416adb6da350d54ebbff1501fff3926f0a62b3994c"} Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.461660 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29db8dd7938f1776c66ef2416adb6da350d54ebbff1501fff3926f0a62b3994c" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.461733 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.646633 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt"] Jan 05 22:10:50 crc kubenswrapper[5000]: E0105 22:10:50.647268 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f9a210-263c-4290-8509-6b86ade6772c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.647380 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f9a210-263c-4290-8509-6b86ade6772c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.647610 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3f9a210-263c-4290-8509-6b86ade6772c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.648292 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.650313 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.650925 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.651208 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.651268 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.651464 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.652620 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.654509 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.665314 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt"] Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.831195 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: 
I0105 22:10:50.831594 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.831674 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.831732 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.831774 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.831800 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.831841 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.831876 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.831906 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j249z\" (UniqueName: \"kubernetes.io/projected/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-kube-api-access-j249z\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.933268 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: 
\"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.933342 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.933418 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.933485 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.933530 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.933561 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.933611 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.933655 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.933689 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j249z\" (UniqueName: \"kubernetes.io/projected/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-kube-api-access-j249z\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.934548 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.939533 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.940149 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.940296 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.940844 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.941290 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.941824 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.943378 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.953122 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j249z\" (UniqueName: \"kubernetes.io/projected/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-kube-api-access-j249z\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64gt\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:50 crc kubenswrapper[5000]: I0105 22:10:50.965032 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:10:51 crc kubenswrapper[5000]: W0105 22:10:51.675488 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50f95f21_c8bd_4de7_8f5b_1e236a1d5d7c.slice/crio-e07b2d225404df433382ab24384e6543e4e30bbd4e693eeb817c448585788c59 WatchSource:0}: Error finding container e07b2d225404df433382ab24384e6543e4e30bbd4e693eeb817c448585788c59: Status 404 returned error can't find the container with id e07b2d225404df433382ab24384e6543e4e30bbd4e693eeb817c448585788c59 Jan 05 22:10:51 crc kubenswrapper[5000]: I0105 22:10:51.675577 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt"] Jan 05 22:10:52 crc kubenswrapper[5000]: I0105 22:10:52.474806 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:10:52 crc kubenswrapper[5000]: I0105 22:10:52.475165 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:10:52 crc kubenswrapper[5000]: I0105 22:10:52.479215 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" event={"ID":"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c","Type":"ContainerStarted","Data":"e07b2d225404df433382ab24384e6543e4e30bbd4e693eeb817c448585788c59"} Jan 05 22:10:53 crc kubenswrapper[5000]: I0105 22:10:53.098605 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:10:53 crc kubenswrapper[5000]: I0105 22:10:53.099014 5000 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:10:53 crc kubenswrapper[5000]: I0105 22:10:53.487843 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" event={"ID":"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c","Type":"ContainerStarted","Data":"0cd5094db3c3aa149433fdbc1a854a5ca7f24ad114e41971d2a15cd35a8d191d"} Jan 05 22:10:53 crc kubenswrapper[5000]: I0105 22:10:53.509554 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" podStartSLOduration=2.6395573949999998 podStartE2EDuration="3.509535747s" podCreationTimestamp="2026-01-05 22:10:50 +0000 UTC" firstStartedPulling="2026-01-05 22:10:51.678285859 +0000 UTC m=+2206.634488338" lastFinishedPulling="2026-01-05 22:10:52.548264221 +0000 UTC m=+2207.504466690" observedRunningTime="2026-01-05 22:10:53.504563456 +0000 UTC m=+2208.460765955" watchObservedRunningTime="2026-01-05 22:10:53.509535747 +0000 UTC m=+2208.465738216" Jan 05 22:10:53 crc kubenswrapper[5000]: I0105 22:10:53.531144 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jdx2j" podUID="a929b189-02ed-46ee-91a4-9f69f2704ba2" containerName="registry-server" probeResult="failure" output=< Jan 05 22:10:53 crc kubenswrapper[5000]: timeout: failed to connect service ":50051" within 1s Jan 05 22:10:53 crc kubenswrapper[5000]: > Jan 05 22:10:55 crc kubenswrapper[5000]: I0105 22:10:55.674467 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8qrwf"] Jan 05 22:10:55 crc kubenswrapper[5000]: I0105 22:10:55.676574 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:10:55 crc kubenswrapper[5000]: I0105 22:10:55.688058 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qrwf"] Jan 05 22:10:55 crc kubenswrapper[5000]: I0105 22:10:55.823751 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b4ba5ec-8b20-43d2-a302-c390d4a45107-catalog-content\") pod \"redhat-marketplace-8qrwf\" (UID: \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\") " pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:10:55 crc kubenswrapper[5000]: I0105 22:10:55.823906 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b4ba5ec-8b20-43d2-a302-c390d4a45107-utilities\") pod \"redhat-marketplace-8qrwf\" (UID: \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\") " pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:10:55 crc kubenswrapper[5000]: I0105 22:10:55.824384 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkkz6\" (UniqueName: \"kubernetes.io/projected/7b4ba5ec-8b20-43d2-a302-c390d4a45107-kube-api-access-lkkz6\") pod \"redhat-marketplace-8qrwf\" (UID: \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\") " pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:10:55 crc kubenswrapper[5000]: I0105 22:10:55.926090 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkkz6\" (UniqueName: \"kubernetes.io/projected/7b4ba5ec-8b20-43d2-a302-c390d4a45107-kube-api-access-lkkz6\") pod \"redhat-marketplace-8qrwf\" (UID: \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\") " pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:10:55 crc kubenswrapper[5000]: I0105 22:10:55.926160 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b4ba5ec-8b20-43d2-a302-c390d4a45107-catalog-content\") pod \"redhat-marketplace-8qrwf\" (UID: \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\") " pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:10:55 crc kubenswrapper[5000]: I0105 22:10:55.926204 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b4ba5ec-8b20-43d2-a302-c390d4a45107-utilities\") pod \"redhat-marketplace-8qrwf\" (UID: \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\") " pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:10:55 crc kubenswrapper[5000]: I0105 22:10:55.927032 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b4ba5ec-8b20-43d2-a302-c390d4a45107-catalog-content\") pod \"redhat-marketplace-8qrwf\" (UID: \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\") " pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:10:55 crc kubenswrapper[5000]: I0105 22:10:55.927037 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b4ba5ec-8b20-43d2-a302-c390d4a45107-utilities\") pod \"redhat-marketplace-8qrwf\" (UID: \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\") " pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:10:55 crc kubenswrapper[5000]: I0105 22:10:55.947673 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkkz6\" (UniqueName: \"kubernetes.io/projected/7b4ba5ec-8b20-43d2-a302-c390d4a45107-kube-api-access-lkkz6\") pod \"redhat-marketplace-8qrwf\" (UID: \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\") " pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:10:55 crc kubenswrapper[5000]: I0105 22:10:55.997940 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:10:56 crc kubenswrapper[5000]: W0105 22:10:56.470809 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b4ba5ec_8b20_43d2_a302_c390d4a45107.slice/crio-f8064989f5da1fd8f14f9347525a712b06bda21673f5a8e99ffc2827e9715550 WatchSource:0}: Error finding container f8064989f5da1fd8f14f9347525a712b06bda21673f5a8e99ffc2827e9715550: Status 404 returned error can't find the container with id f8064989f5da1fd8f14f9347525a712b06bda21673f5a8e99ffc2827e9715550 Jan 05 22:10:56 crc kubenswrapper[5000]: I0105 22:10:56.471453 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qrwf"] Jan 05 22:10:56 crc kubenswrapper[5000]: I0105 22:10:56.511731 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qrwf" event={"ID":"7b4ba5ec-8b20-43d2-a302-c390d4a45107","Type":"ContainerStarted","Data":"f8064989f5da1fd8f14f9347525a712b06bda21673f5a8e99ffc2827e9715550"} Jan 05 22:10:57 crc kubenswrapper[5000]: I0105 22:10:57.523049 5000 generic.go:334] "Generic (PLEG): container finished" podID="7b4ba5ec-8b20-43d2-a302-c390d4a45107" containerID="7ff1274bcd1ad53140981de4faddd0f18f1706903be9d6760705df4c5cfcf95e" exitCode=0 Jan 05 22:10:57 crc kubenswrapper[5000]: I0105 22:10:57.523157 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qrwf" event={"ID":"7b4ba5ec-8b20-43d2-a302-c390d4a45107","Type":"ContainerDied","Data":"7ff1274bcd1ad53140981de4faddd0f18f1706903be9d6760705df4c5cfcf95e"} Jan 05 22:10:57 crc kubenswrapper[5000]: I0105 22:10:57.524774 5000 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:10:58 crc kubenswrapper[5000]: I0105 22:10:58.533263 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-8qrwf" event={"ID":"7b4ba5ec-8b20-43d2-a302-c390d4a45107","Type":"ContainerStarted","Data":"f27d27a1fc9278123209d9006a3062719d4408f6ad10aba4549694ed7f9ac135"} Jan 05 22:10:59 crc kubenswrapper[5000]: I0105 22:10:59.543968 5000 generic.go:334] "Generic (PLEG): container finished" podID="7b4ba5ec-8b20-43d2-a302-c390d4a45107" containerID="f27d27a1fc9278123209d9006a3062719d4408f6ad10aba4549694ed7f9ac135" exitCode=0 Jan 05 22:10:59 crc kubenswrapper[5000]: I0105 22:10:59.544031 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qrwf" event={"ID":"7b4ba5ec-8b20-43d2-a302-c390d4a45107","Type":"ContainerDied","Data":"f27d27a1fc9278123209d9006a3062719d4408f6ad10aba4549694ed7f9ac135"} Jan 05 22:11:00 crc kubenswrapper[5000]: I0105 22:11:00.554749 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qrwf" event={"ID":"7b4ba5ec-8b20-43d2-a302-c390d4a45107","Type":"ContainerStarted","Data":"b85359987aeac8816072b6239adb63f8d068eaec59a30865fa042bb51334f59c"} Jan 05 22:11:00 crc kubenswrapper[5000]: I0105 22:11:00.570823 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8qrwf" podStartSLOduration=3.108014722 podStartE2EDuration="5.570805494s" podCreationTimestamp="2026-01-05 22:10:55 +0000 UTC" firstStartedPulling="2026-01-05 22:10:57.524565416 +0000 UTC m=+2212.480767885" lastFinishedPulling="2026-01-05 22:10:59.987356188 +0000 UTC m=+2214.943558657" observedRunningTime="2026-01-05 22:11:00.569584189 +0000 UTC m=+2215.525786658" watchObservedRunningTime="2026-01-05 22:11:00.570805494 +0000 UTC m=+2215.527007963" Jan 05 22:11:02 crc kubenswrapper[5000]: I0105 22:11:02.516881 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:11:02 crc kubenswrapper[5000]: I0105 22:11:02.584773 5000 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:11:03 crc kubenswrapper[5000]: I0105 22:11:03.049837 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jdx2j"] Jan 05 22:11:03 crc kubenswrapper[5000]: I0105 22:11:03.584686 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jdx2j" podUID="a929b189-02ed-46ee-91a4-9f69f2704ba2" containerName="registry-server" containerID="cri-o://4ecd5ec17afb3cb31695688ad6a238f4971e03fa82ff5c2029d4bbb101bba1a0" gracePeriod=2 Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.072964 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.165361 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a929b189-02ed-46ee-91a4-9f69f2704ba2-catalog-content\") pod \"a929b189-02ed-46ee-91a4-9f69f2704ba2\" (UID: \"a929b189-02ed-46ee-91a4-9f69f2704ba2\") " Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.165547 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzth6\" (UniqueName: \"kubernetes.io/projected/a929b189-02ed-46ee-91a4-9f69f2704ba2-kube-api-access-zzth6\") pod \"a929b189-02ed-46ee-91a4-9f69f2704ba2\" (UID: \"a929b189-02ed-46ee-91a4-9f69f2704ba2\") " Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.165598 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a929b189-02ed-46ee-91a4-9f69f2704ba2-utilities\") pod \"a929b189-02ed-46ee-91a4-9f69f2704ba2\" (UID: \"a929b189-02ed-46ee-91a4-9f69f2704ba2\") " Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.166970 5000 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a929b189-02ed-46ee-91a4-9f69f2704ba2-utilities" (OuterVolumeSpecName: "utilities") pod "a929b189-02ed-46ee-91a4-9f69f2704ba2" (UID: "a929b189-02ed-46ee-91a4-9f69f2704ba2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.172475 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a929b189-02ed-46ee-91a4-9f69f2704ba2-kube-api-access-zzth6" (OuterVolumeSpecName: "kube-api-access-zzth6") pod "a929b189-02ed-46ee-91a4-9f69f2704ba2" (UID: "a929b189-02ed-46ee-91a4-9f69f2704ba2"). InnerVolumeSpecName "kube-api-access-zzth6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.264901 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a929b189-02ed-46ee-91a4-9f69f2704ba2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a929b189-02ed-46ee-91a4-9f69f2704ba2" (UID: "a929b189-02ed-46ee-91a4-9f69f2704ba2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.267366 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a929b189-02ed-46ee-91a4-9f69f2704ba2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.267395 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzth6\" (UniqueName: \"kubernetes.io/projected/a929b189-02ed-46ee-91a4-9f69f2704ba2-kube-api-access-zzth6\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.267406 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a929b189-02ed-46ee-91a4-9f69f2704ba2-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.596054 5000 generic.go:334] "Generic (PLEG): container finished" podID="a929b189-02ed-46ee-91a4-9f69f2704ba2" containerID="4ecd5ec17afb3cb31695688ad6a238f4971e03fa82ff5c2029d4bbb101bba1a0" exitCode=0 Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.596095 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdx2j" event={"ID":"a929b189-02ed-46ee-91a4-9f69f2704ba2","Type":"ContainerDied","Data":"4ecd5ec17afb3cb31695688ad6a238f4971e03fa82ff5c2029d4bbb101bba1a0"} Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.596124 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdx2j" event={"ID":"a929b189-02ed-46ee-91a4-9f69f2704ba2","Type":"ContainerDied","Data":"f0cd5c1205fc78de9323d710c43f05af9d333091a26bad867d8f06d30cce326f"} Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.596144 5000 scope.go:117] "RemoveContainer" containerID="4ecd5ec17afb3cb31695688ad6a238f4971e03fa82ff5c2029d4bbb101bba1a0" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.596160 
5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jdx2j" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.616039 5000 scope.go:117] "RemoveContainer" containerID="7869d2663b67c24709a4c63d4b0c57b7e85c1ba31eaabb7ddf2145636067a04d" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.634552 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jdx2j"] Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.643221 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jdx2j"] Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.647975 5000 scope.go:117] "RemoveContainer" containerID="148a9bb38e43b0f41d229cc4f1b7e69f2392612acdd016e59c41587ac9cf8110" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.694336 5000 scope.go:117] "RemoveContainer" containerID="4ecd5ec17afb3cb31695688ad6a238f4971e03fa82ff5c2029d4bbb101bba1a0" Jan 05 22:11:04 crc kubenswrapper[5000]: E0105 22:11:04.694903 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ecd5ec17afb3cb31695688ad6a238f4971e03fa82ff5c2029d4bbb101bba1a0\": container with ID starting with 4ecd5ec17afb3cb31695688ad6a238f4971e03fa82ff5c2029d4bbb101bba1a0 not found: ID does not exist" containerID="4ecd5ec17afb3cb31695688ad6a238f4971e03fa82ff5c2029d4bbb101bba1a0" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.694936 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ecd5ec17afb3cb31695688ad6a238f4971e03fa82ff5c2029d4bbb101bba1a0"} err="failed to get container status \"4ecd5ec17afb3cb31695688ad6a238f4971e03fa82ff5c2029d4bbb101bba1a0\": rpc error: code = NotFound desc = could not find container \"4ecd5ec17afb3cb31695688ad6a238f4971e03fa82ff5c2029d4bbb101bba1a0\": container with ID starting with 
4ecd5ec17afb3cb31695688ad6a238f4971e03fa82ff5c2029d4bbb101bba1a0 not found: ID does not exist" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.694957 5000 scope.go:117] "RemoveContainer" containerID="7869d2663b67c24709a4c63d4b0c57b7e85c1ba31eaabb7ddf2145636067a04d" Jan 05 22:11:04 crc kubenswrapper[5000]: E0105 22:11:04.696053 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7869d2663b67c24709a4c63d4b0c57b7e85c1ba31eaabb7ddf2145636067a04d\": container with ID starting with 7869d2663b67c24709a4c63d4b0c57b7e85c1ba31eaabb7ddf2145636067a04d not found: ID does not exist" containerID="7869d2663b67c24709a4c63d4b0c57b7e85c1ba31eaabb7ddf2145636067a04d" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.696084 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7869d2663b67c24709a4c63d4b0c57b7e85c1ba31eaabb7ddf2145636067a04d"} err="failed to get container status \"7869d2663b67c24709a4c63d4b0c57b7e85c1ba31eaabb7ddf2145636067a04d\": rpc error: code = NotFound desc = could not find container \"7869d2663b67c24709a4c63d4b0c57b7e85c1ba31eaabb7ddf2145636067a04d\": container with ID starting with 7869d2663b67c24709a4c63d4b0c57b7e85c1ba31eaabb7ddf2145636067a04d not found: ID does not exist" Jan 05 22:11:04 crc kubenswrapper[5000]: I0105 22:11:04.696135 5000 scope.go:117] "RemoveContainer" containerID="148a9bb38e43b0f41d229cc4f1b7e69f2392612acdd016e59c41587ac9cf8110" Jan 05 22:11:04 crc kubenswrapper[5000]: E0105 22:11:04.696445 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"148a9bb38e43b0f41d229cc4f1b7e69f2392612acdd016e59c41587ac9cf8110\": container with ID starting with 148a9bb38e43b0f41d229cc4f1b7e69f2392612acdd016e59c41587ac9cf8110 not found: ID does not exist" containerID="148a9bb38e43b0f41d229cc4f1b7e69f2392612acdd016e59c41587ac9cf8110" Jan 05 22:11:04 crc 
kubenswrapper[5000]: I0105 22:11:04.696474 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"148a9bb38e43b0f41d229cc4f1b7e69f2392612acdd016e59c41587ac9cf8110"} err="failed to get container status \"148a9bb38e43b0f41d229cc4f1b7e69f2392612acdd016e59c41587ac9cf8110\": rpc error: code = NotFound desc = could not find container \"148a9bb38e43b0f41d229cc4f1b7e69f2392612acdd016e59c41587ac9cf8110\": container with ID starting with 148a9bb38e43b0f41d229cc4f1b7e69f2392612acdd016e59c41587ac9cf8110 not found: ID does not exist" Jan 05 22:11:05 crc kubenswrapper[5000]: I0105 22:11:05.334537 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a929b189-02ed-46ee-91a4-9f69f2704ba2" path="/var/lib/kubelet/pods/a929b189-02ed-46ee-91a4-9f69f2704ba2/volumes" Jan 05 22:11:05 crc kubenswrapper[5000]: I0105 22:11:05.998451 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:11:05 crc kubenswrapper[5000]: I0105 22:11:05.998536 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:11:06 crc kubenswrapper[5000]: I0105 22:11:06.056272 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:11:06 crc kubenswrapper[5000]: I0105 22:11:06.665586 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:11:07 crc kubenswrapper[5000]: I0105 22:11:07.448113 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qrwf"] Jan 05 22:11:08 crc kubenswrapper[5000]: I0105 22:11:08.627870 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8qrwf" podUID="7b4ba5ec-8b20-43d2-a302-c390d4a45107" 
containerName="registry-server" containerID="cri-o://b85359987aeac8816072b6239adb63f8d068eaec59a30865fa042bb51334f59c" gracePeriod=2 Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.094418 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.155667 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkkz6\" (UniqueName: \"kubernetes.io/projected/7b4ba5ec-8b20-43d2-a302-c390d4a45107-kube-api-access-lkkz6\") pod \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\" (UID: \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\") " Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.155864 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b4ba5ec-8b20-43d2-a302-c390d4a45107-catalog-content\") pod \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\" (UID: \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\") " Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.155930 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b4ba5ec-8b20-43d2-a302-c390d4a45107-utilities\") pod \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\" (UID: \"7b4ba5ec-8b20-43d2-a302-c390d4a45107\") " Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.156978 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b4ba5ec-8b20-43d2-a302-c390d4a45107-utilities" (OuterVolumeSpecName: "utilities") pod "7b4ba5ec-8b20-43d2-a302-c390d4a45107" (UID: "7b4ba5ec-8b20-43d2-a302-c390d4a45107"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.161468 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4ba5ec-8b20-43d2-a302-c390d4a45107-kube-api-access-lkkz6" (OuterVolumeSpecName: "kube-api-access-lkkz6") pod "7b4ba5ec-8b20-43d2-a302-c390d4a45107" (UID: "7b4ba5ec-8b20-43d2-a302-c390d4a45107"). InnerVolumeSpecName "kube-api-access-lkkz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.177534 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b4ba5ec-8b20-43d2-a302-c390d4a45107-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b4ba5ec-8b20-43d2-a302-c390d4a45107" (UID: "7b4ba5ec-8b20-43d2-a302-c390d4a45107"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.257941 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkkz6\" (UniqueName: \"kubernetes.io/projected/7b4ba5ec-8b20-43d2-a302-c390d4a45107-kube-api-access-lkkz6\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.258225 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b4ba5ec-8b20-43d2-a302-c390d4a45107-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.258235 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b4ba5ec-8b20-43d2-a302-c390d4a45107-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.637650 5000 generic.go:334] "Generic (PLEG): container finished" podID="7b4ba5ec-8b20-43d2-a302-c390d4a45107" 
containerID="b85359987aeac8816072b6239adb63f8d068eaec59a30865fa042bb51334f59c" exitCode=0 Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.637733 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8qrwf" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.637755 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qrwf" event={"ID":"7b4ba5ec-8b20-43d2-a302-c390d4a45107","Type":"ContainerDied","Data":"b85359987aeac8816072b6239adb63f8d068eaec59a30865fa042bb51334f59c"} Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.639124 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8qrwf" event={"ID":"7b4ba5ec-8b20-43d2-a302-c390d4a45107","Type":"ContainerDied","Data":"f8064989f5da1fd8f14f9347525a712b06bda21673f5a8e99ffc2827e9715550"} Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.639148 5000 scope.go:117] "RemoveContainer" containerID="b85359987aeac8816072b6239adb63f8d068eaec59a30865fa042bb51334f59c" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.656981 5000 scope.go:117] "RemoveContainer" containerID="f27d27a1fc9278123209d9006a3062719d4408f6ad10aba4549694ed7f9ac135" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.671780 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qrwf"] Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.679203 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8qrwf"] Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.693869 5000 scope.go:117] "RemoveContainer" containerID="7ff1274bcd1ad53140981de4faddd0f18f1706903be9d6760705df4c5cfcf95e" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.727399 5000 scope.go:117] "RemoveContainer" containerID="b85359987aeac8816072b6239adb63f8d068eaec59a30865fa042bb51334f59c" Jan 05 
22:11:09 crc kubenswrapper[5000]: E0105 22:11:09.728084 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b85359987aeac8816072b6239adb63f8d068eaec59a30865fa042bb51334f59c\": container with ID starting with b85359987aeac8816072b6239adb63f8d068eaec59a30865fa042bb51334f59c not found: ID does not exist" containerID="b85359987aeac8816072b6239adb63f8d068eaec59a30865fa042bb51334f59c" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.728122 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b85359987aeac8816072b6239adb63f8d068eaec59a30865fa042bb51334f59c"} err="failed to get container status \"b85359987aeac8816072b6239adb63f8d068eaec59a30865fa042bb51334f59c\": rpc error: code = NotFound desc = could not find container \"b85359987aeac8816072b6239adb63f8d068eaec59a30865fa042bb51334f59c\": container with ID starting with b85359987aeac8816072b6239adb63f8d068eaec59a30865fa042bb51334f59c not found: ID does not exist" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.728148 5000 scope.go:117] "RemoveContainer" containerID="f27d27a1fc9278123209d9006a3062719d4408f6ad10aba4549694ed7f9ac135" Jan 05 22:11:09 crc kubenswrapper[5000]: E0105 22:11:09.728450 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f27d27a1fc9278123209d9006a3062719d4408f6ad10aba4549694ed7f9ac135\": container with ID starting with f27d27a1fc9278123209d9006a3062719d4408f6ad10aba4549694ed7f9ac135 not found: ID does not exist" containerID="f27d27a1fc9278123209d9006a3062719d4408f6ad10aba4549694ed7f9ac135" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.728482 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27d27a1fc9278123209d9006a3062719d4408f6ad10aba4549694ed7f9ac135"} err="failed to get container status 
\"f27d27a1fc9278123209d9006a3062719d4408f6ad10aba4549694ed7f9ac135\": rpc error: code = NotFound desc = could not find container \"f27d27a1fc9278123209d9006a3062719d4408f6ad10aba4549694ed7f9ac135\": container with ID starting with f27d27a1fc9278123209d9006a3062719d4408f6ad10aba4549694ed7f9ac135 not found: ID does not exist" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.728494 5000 scope.go:117] "RemoveContainer" containerID="7ff1274bcd1ad53140981de4faddd0f18f1706903be9d6760705df4c5cfcf95e" Jan 05 22:11:09 crc kubenswrapper[5000]: E0105 22:11:09.728828 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ff1274bcd1ad53140981de4faddd0f18f1706903be9d6760705df4c5cfcf95e\": container with ID starting with 7ff1274bcd1ad53140981de4faddd0f18f1706903be9d6760705df4c5cfcf95e not found: ID does not exist" containerID="7ff1274bcd1ad53140981de4faddd0f18f1706903be9d6760705df4c5cfcf95e" Jan 05 22:11:09 crc kubenswrapper[5000]: I0105 22:11:09.728851 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ff1274bcd1ad53140981de4faddd0f18f1706903be9d6760705df4c5cfcf95e"} err="failed to get container status \"7ff1274bcd1ad53140981de4faddd0f18f1706903be9d6760705df4c5cfcf95e\": rpc error: code = NotFound desc = could not find container \"7ff1274bcd1ad53140981de4faddd0f18f1706903be9d6760705df4c5cfcf95e\": container with ID starting with 7ff1274bcd1ad53140981de4faddd0f18f1706903be9d6760705df4c5cfcf95e not found: ID does not exist" Jan 05 22:11:11 crc kubenswrapper[5000]: I0105 22:11:11.333702 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b4ba5ec-8b20-43d2-a302-c390d4a45107" path="/var/lib/kubelet/pods/7b4ba5ec-8b20-43d2-a302-c390d4a45107/volumes" Jan 05 22:11:23 crc kubenswrapper[5000]: I0105 22:11:23.099197 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:11:23 crc kubenswrapper[5000]: I0105 22:11:23.100051 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:11:23 crc kubenswrapper[5000]: I0105 22:11:23.100130 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 22:11:23 crc kubenswrapper[5000]: I0105 22:11:23.101427 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:11:23 crc kubenswrapper[5000]: I0105 22:11:23.101556 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" gracePeriod=600 Jan 05 22:11:23 crc kubenswrapper[5000]: E0105 22:11:23.223522 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:11:23 crc kubenswrapper[5000]: I0105 22:11:23.757876 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" exitCode=0 Jan 05 22:11:23 crc kubenswrapper[5000]: I0105 22:11:23.757994 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558"} Jan 05 22:11:23 crc kubenswrapper[5000]: I0105 22:11:23.758291 5000 scope.go:117] "RemoveContainer" containerID="7e3da05d5f67590c9b2527cc500930111ded9c9f1144452852c6a5338d56bdf7" Jan 05 22:11:23 crc kubenswrapper[5000]: I0105 22:11:23.760307 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:11:23 crc kubenswrapper[5000]: E0105 22:11:23.760687 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:11:36 crc kubenswrapper[5000]: I0105 22:11:36.324247 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:11:36 crc kubenswrapper[5000]: E0105 22:11:36.325054 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.165221 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5fq9m"] Jan 05 22:11:41 crc kubenswrapper[5000]: E0105 22:11:41.166096 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a929b189-02ed-46ee-91a4-9f69f2704ba2" containerName="extract-content" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.166108 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a929b189-02ed-46ee-91a4-9f69f2704ba2" containerName="extract-content" Jan 05 22:11:41 crc kubenswrapper[5000]: E0105 22:11:41.166121 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a929b189-02ed-46ee-91a4-9f69f2704ba2" containerName="extract-utilities" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.166127 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a929b189-02ed-46ee-91a4-9f69f2704ba2" containerName="extract-utilities" Jan 05 22:11:41 crc kubenswrapper[5000]: E0105 22:11:41.166142 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a929b189-02ed-46ee-91a4-9f69f2704ba2" containerName="registry-server" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.166149 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a929b189-02ed-46ee-91a4-9f69f2704ba2" containerName="registry-server" Jan 05 22:11:41 crc kubenswrapper[5000]: E0105 22:11:41.166172 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4ba5ec-8b20-43d2-a302-c390d4a45107" containerName="registry-server" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.166179 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4ba5ec-8b20-43d2-a302-c390d4a45107" 
containerName="registry-server" Jan 05 22:11:41 crc kubenswrapper[5000]: E0105 22:11:41.166188 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4ba5ec-8b20-43d2-a302-c390d4a45107" containerName="extract-utilities" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.166193 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4ba5ec-8b20-43d2-a302-c390d4a45107" containerName="extract-utilities" Jan 05 22:11:41 crc kubenswrapper[5000]: E0105 22:11:41.166207 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4ba5ec-8b20-43d2-a302-c390d4a45107" containerName="extract-content" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.166213 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4ba5ec-8b20-43d2-a302-c390d4a45107" containerName="extract-content" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.166378 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a929b189-02ed-46ee-91a4-9f69f2704ba2" containerName="registry-server" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.166393 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b4ba5ec-8b20-43d2-a302-c390d4a45107" containerName="registry-server" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.167636 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.182677 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5fq9m"] Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.299413 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-catalog-content\") pod \"certified-operators-5fq9m\" (UID: \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\") " pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.299459 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-utilities\") pod \"certified-operators-5fq9m\" (UID: \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\") " pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.299740 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc7k9\" (UniqueName: \"kubernetes.io/projected/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-kube-api-access-jc7k9\") pod \"certified-operators-5fq9m\" (UID: \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\") " pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.402207 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc7k9\" (UniqueName: \"kubernetes.io/projected/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-kube-api-access-jc7k9\") pod \"certified-operators-5fq9m\" (UID: \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\") " pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.402320 5000 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-catalog-content\") pod \"certified-operators-5fq9m\" (UID: \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\") " pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.402345 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-utilities\") pod \"certified-operators-5fq9m\" (UID: \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\") " pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.402869 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-utilities\") pod \"certified-operators-5fq9m\" (UID: \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\") " pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.402932 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-catalog-content\") pod \"certified-operators-5fq9m\" (UID: \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\") " pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.423614 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc7k9\" (UniqueName: \"kubernetes.io/projected/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-kube-api-access-jc7k9\") pod \"certified-operators-5fq9m\" (UID: \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\") " pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:41 crc kubenswrapper[5000]: I0105 22:11:41.523704 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:42 crc kubenswrapper[5000]: I0105 22:11:42.002936 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5fq9m"] Jan 05 22:11:42 crc kubenswrapper[5000]: I0105 22:11:42.959441 5000 generic.go:334] "Generic (PLEG): container finished" podID="4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" containerID="77519f75e39467d3c2d0271c44a8b414d340636ca2f675966f48777903b19fe9" exitCode=0 Jan 05 22:11:42 crc kubenswrapper[5000]: I0105 22:11:42.959494 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fq9m" event={"ID":"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b","Type":"ContainerDied","Data":"77519f75e39467d3c2d0271c44a8b414d340636ca2f675966f48777903b19fe9"} Jan 05 22:11:42 crc kubenswrapper[5000]: I0105 22:11:42.959710 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fq9m" event={"ID":"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b","Type":"ContainerStarted","Data":"3e0870bf6ce4ade575097fffc286af8aa3c04eb7bc15c44e8a66b9a0c10f43eb"} Jan 05 22:11:44 crc kubenswrapper[5000]: I0105 22:11:44.989745 5000 generic.go:334] "Generic (PLEG): container finished" podID="4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" containerID="959cfce6644a9d4769549edd8fd685d8c172d5f5c23dd97dab0cfd1de6d6c289" exitCode=0 Jan 05 22:11:44 crc kubenswrapper[5000]: I0105 22:11:44.989878 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fq9m" event={"ID":"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b","Type":"ContainerDied","Data":"959cfce6644a9d4769549edd8fd685d8c172d5f5c23dd97dab0cfd1de6d6c289"} Jan 05 22:11:45 crc kubenswrapper[5000]: I0105 22:11:45.999301 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fq9m" 
event={"ID":"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b","Type":"ContainerStarted","Data":"60570615cba5e8bbdcfde074edf33ecb813503085ac59452e274e272b20c922d"} Jan 05 22:11:46 crc kubenswrapper[5000]: I0105 22:11:46.021045 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5fq9m" podStartSLOduration=2.371407868 podStartE2EDuration="5.021019801s" podCreationTimestamp="2026-01-05 22:11:41 +0000 UTC" firstStartedPulling="2026-01-05 22:11:42.961160846 +0000 UTC m=+2257.917363315" lastFinishedPulling="2026-01-05 22:11:45.610772779 +0000 UTC m=+2260.566975248" observedRunningTime="2026-01-05 22:11:46.012977943 +0000 UTC m=+2260.969180442" watchObservedRunningTime="2026-01-05 22:11:46.021019801 +0000 UTC m=+2260.977222270" Jan 05 22:11:48 crc kubenswrapper[5000]: I0105 22:11:48.324436 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:11:48 crc kubenswrapper[5000]: E0105 22:11:48.326097 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:11:51 crc kubenswrapper[5000]: I0105 22:11:51.524841 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:51 crc kubenswrapper[5000]: I0105 22:11:51.525239 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:51 crc kubenswrapper[5000]: I0105 22:11:51.578904 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:52 crc kubenswrapper[5000]: I0105 22:11:52.092079 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:52 crc kubenswrapper[5000]: I0105 22:11:52.148730 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5fq9m"] Jan 05 22:11:54 crc kubenswrapper[5000]: I0105 22:11:54.073735 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5fq9m" podUID="4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" containerName="registry-server" containerID="cri-o://60570615cba5e8bbdcfde074edf33ecb813503085ac59452e274e272b20c922d" gracePeriod=2 Jan 05 22:11:54 crc kubenswrapper[5000]: I0105 22:11:54.507874 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:54 crc kubenswrapper[5000]: I0105 22:11:54.582523 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc7k9\" (UniqueName: \"kubernetes.io/projected/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-kube-api-access-jc7k9\") pod \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\" (UID: \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\") " Jan 05 22:11:54 crc kubenswrapper[5000]: I0105 22:11:54.582759 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-utilities\") pod \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\" (UID: \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\") " Jan 05 22:11:54 crc kubenswrapper[5000]: I0105 22:11:54.582875 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-catalog-content\") pod 
\"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\" (UID: \"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b\") " Jan 05 22:11:54 crc kubenswrapper[5000]: I0105 22:11:54.583673 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-utilities" (OuterVolumeSpecName: "utilities") pod "4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" (UID: "4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:11:54 crc kubenswrapper[5000]: I0105 22:11:54.589142 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-kube-api-access-jc7k9" (OuterVolumeSpecName: "kube-api-access-jc7k9") pod "4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" (UID: "4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b"). InnerVolumeSpecName "kube-api-access-jc7k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:11:54 crc kubenswrapper[5000]: I0105 22:11:54.684928 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:54 crc kubenswrapper[5000]: I0105 22:11:54.684967 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc7k9\" (UniqueName: \"kubernetes.io/projected/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-kube-api-access-jc7k9\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:54 crc kubenswrapper[5000]: I0105 22:11:54.820326 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" (UID: "4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:11:54 crc kubenswrapper[5000]: I0105 22:11:54.890126 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.088933 5000 generic.go:334] "Generic (PLEG): container finished" podID="4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" containerID="60570615cba5e8bbdcfde074edf33ecb813503085ac59452e274e272b20c922d" exitCode=0 Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.088983 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fq9m" event={"ID":"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b","Type":"ContainerDied","Data":"60570615cba5e8bbdcfde074edf33ecb813503085ac59452e274e272b20c922d"} Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.089017 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fq9m" event={"ID":"4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b","Type":"ContainerDied","Data":"3e0870bf6ce4ade575097fffc286af8aa3c04eb7bc15c44e8a66b9a0c10f43eb"} Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.089040 5000 scope.go:117] "RemoveContainer" containerID="60570615cba5e8bbdcfde074edf33ecb813503085ac59452e274e272b20c922d" Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.089039 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5fq9m" Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.131782 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5fq9m"] Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.136325 5000 scope.go:117] "RemoveContainer" containerID="959cfce6644a9d4769549edd8fd685d8c172d5f5c23dd97dab0cfd1de6d6c289" Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.139748 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5fq9m"] Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.168658 5000 scope.go:117] "RemoveContainer" containerID="77519f75e39467d3c2d0271c44a8b414d340636ca2f675966f48777903b19fe9" Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.213352 5000 scope.go:117] "RemoveContainer" containerID="60570615cba5e8bbdcfde074edf33ecb813503085ac59452e274e272b20c922d" Jan 05 22:11:55 crc kubenswrapper[5000]: E0105 22:11:55.213922 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60570615cba5e8bbdcfde074edf33ecb813503085ac59452e274e272b20c922d\": container with ID starting with 60570615cba5e8bbdcfde074edf33ecb813503085ac59452e274e272b20c922d not found: ID does not exist" containerID="60570615cba5e8bbdcfde074edf33ecb813503085ac59452e274e272b20c922d" Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.213963 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60570615cba5e8bbdcfde074edf33ecb813503085ac59452e274e272b20c922d"} err="failed to get container status \"60570615cba5e8bbdcfde074edf33ecb813503085ac59452e274e272b20c922d\": rpc error: code = NotFound desc = could not find container \"60570615cba5e8bbdcfde074edf33ecb813503085ac59452e274e272b20c922d\": container with ID starting with 60570615cba5e8bbdcfde074edf33ecb813503085ac59452e274e272b20c922d not 
found: ID does not exist" Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.213991 5000 scope.go:117] "RemoveContainer" containerID="959cfce6644a9d4769549edd8fd685d8c172d5f5c23dd97dab0cfd1de6d6c289" Jan 05 22:11:55 crc kubenswrapper[5000]: E0105 22:11:55.214322 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"959cfce6644a9d4769549edd8fd685d8c172d5f5c23dd97dab0cfd1de6d6c289\": container with ID starting with 959cfce6644a9d4769549edd8fd685d8c172d5f5c23dd97dab0cfd1de6d6c289 not found: ID does not exist" containerID="959cfce6644a9d4769549edd8fd685d8c172d5f5c23dd97dab0cfd1de6d6c289" Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.214376 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"959cfce6644a9d4769549edd8fd685d8c172d5f5c23dd97dab0cfd1de6d6c289"} err="failed to get container status \"959cfce6644a9d4769549edd8fd685d8c172d5f5c23dd97dab0cfd1de6d6c289\": rpc error: code = NotFound desc = could not find container \"959cfce6644a9d4769549edd8fd685d8c172d5f5c23dd97dab0cfd1de6d6c289\": container with ID starting with 959cfce6644a9d4769549edd8fd685d8c172d5f5c23dd97dab0cfd1de6d6c289 not found: ID does not exist" Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.214418 5000 scope.go:117] "RemoveContainer" containerID="77519f75e39467d3c2d0271c44a8b414d340636ca2f675966f48777903b19fe9" Jan 05 22:11:55 crc kubenswrapper[5000]: E0105 22:11:55.215106 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77519f75e39467d3c2d0271c44a8b414d340636ca2f675966f48777903b19fe9\": container with ID starting with 77519f75e39467d3c2d0271c44a8b414d340636ca2f675966f48777903b19fe9 not found: ID does not exist" containerID="77519f75e39467d3c2d0271c44a8b414d340636ca2f675966f48777903b19fe9" Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.215145 5000 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77519f75e39467d3c2d0271c44a8b414d340636ca2f675966f48777903b19fe9"} err="failed to get container status \"77519f75e39467d3c2d0271c44a8b414d340636ca2f675966f48777903b19fe9\": rpc error: code = NotFound desc = could not find container \"77519f75e39467d3c2d0271c44a8b414d340636ca2f675966f48777903b19fe9\": container with ID starting with 77519f75e39467d3c2d0271c44a8b414d340636ca2f675966f48777903b19fe9 not found: ID does not exist" Jan 05 22:11:55 crc kubenswrapper[5000]: I0105 22:11:55.337630 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" path="/var/lib/kubelet/pods/4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b/volumes" Jan 05 22:11:59 crc kubenswrapper[5000]: I0105 22:11:59.324939 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:11:59 crc kubenswrapper[5000]: E0105 22:11:59.325976 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:12:10 crc kubenswrapper[5000]: I0105 22:12:10.330757 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:12:10 crc kubenswrapper[5000]: E0105 22:12:10.331906 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:12:22 crc kubenswrapper[5000]: I0105 22:12:22.324048 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:12:22 crc kubenswrapper[5000]: E0105 22:12:22.325189 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:12:33 crc kubenswrapper[5000]: I0105 22:12:33.324484 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:12:33 crc kubenswrapper[5000]: E0105 22:12:33.325315 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:12:46 crc kubenswrapper[5000]: I0105 22:12:46.324425 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:12:46 crc kubenswrapper[5000]: E0105 22:12:46.325681 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:12:58 crc kubenswrapper[5000]: I0105 22:12:58.324470 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:12:58 crc kubenswrapper[5000]: E0105 22:12:58.325912 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:13:07 crc kubenswrapper[5000]: I0105 22:13:07.715773 5000 generic.go:334] "Generic (PLEG): container finished" podID="50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c" containerID="0cd5094db3c3aa149433fdbc1a854a5ca7f24ad114e41971d2a15cd35a8d191d" exitCode=0 Jan 05 22:13:07 crc kubenswrapper[5000]: I0105 22:13:07.715871 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" event={"ID":"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c","Type":"ContainerDied","Data":"0cd5094db3c3aa149433fdbc1a854a5ca7f24ad114e41971d2a15cd35a8d191d"} Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.153160 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.172209 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-extra-config-0\") pod \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.172405 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-inventory\") pod \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.172438 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-ssh-key\") pod \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.172460 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-combined-ca-bundle\") pod \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.172501 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j249z\" (UniqueName: \"kubernetes.io/projected/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-kube-api-access-j249z\") pod \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.172520 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-migration-ssh-key-1\") pod \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.172560 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-migration-ssh-key-0\") pod \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.172638 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-cell1-compute-config-0\") pod \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.172664 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-cell1-compute-config-1\") pod \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\" (UID: \"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c\") " Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.182995 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-kube-api-access-j249z" (OuterVolumeSpecName: "kube-api-access-j249z") pod "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c" (UID: "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c"). InnerVolumeSpecName "kube-api-access-j249z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.190100 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c" (UID: "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.212555 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c" (UID: "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.213193 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c" (UID: "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.214350 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c" (UID: "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.215397 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c" (UID: "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.223761 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-inventory" (OuterVolumeSpecName: "inventory") pod "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c" (UID: "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.236689 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c" (UID: "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.237423 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c" (UID: "50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.274737 5000 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.274866 5000 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.274947 5000 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.275004 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.275058 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.275137 5000 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.275208 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j249z\" (UniqueName: \"kubernetes.io/projected/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-kube-api-access-j249z\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:09 
crc kubenswrapper[5000]: I0105 22:13:09.275275 5000 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.275340 5000 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.731507 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" event={"ID":"50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c","Type":"ContainerDied","Data":"e07b2d225404df433382ab24384e6543e4e30bbd4e693eeb817c448585788c59"} Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.731555 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e07b2d225404df433382ab24384e6543e4e30bbd4e693eeb817c448585788c59" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.731573 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64gt" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.842291 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd"] Jan 05 22:13:09 crc kubenswrapper[5000]: E0105 22:13:09.843091 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" containerName="extract-content" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.843180 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" containerName="extract-content" Jan 05 22:13:09 crc kubenswrapper[5000]: E0105 22:13:09.843260 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.843313 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 05 22:13:09 crc kubenswrapper[5000]: E0105 22:13:09.843386 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" containerName="extract-utilities" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.843461 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" containerName="extract-utilities" Jan 05 22:13:09 crc kubenswrapper[5000]: E0105 22:13:09.843532 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" containerName="registry-server" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.843591 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" containerName="registry-server" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.844069 5000 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4a6d4eb4-2a3f-45d4-b6d9-fa62d40efa8b" containerName="registry-server" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.844188 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.848066 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.851790 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-46vtl" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.851967 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.852212 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.852070 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.853402 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.856168 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd"] Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.885534 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjn78\" (UniqueName: \"kubernetes.io/projected/9457bd68-0fcd-45ee-9625-4a82d4ad181d-kube-api-access-sjn78\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.885868 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.886060 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.886175 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.886299 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.886468 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.886571 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.988398 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.988469 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.988511 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.988527 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.988551 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjn78\" (UniqueName: \"kubernetes.io/projected/9457bd68-0fcd-45ee-9625-4a82d4ad181d-kube-api-access-sjn78\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.988588 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.989326 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.996043 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.996043 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.996367 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.996568 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.996731 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:09 crc kubenswrapper[5000]: I0105 22:13:09.997085 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:10 crc kubenswrapper[5000]: I0105 22:13:10.005711 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjn78\" (UniqueName: \"kubernetes.io/projected/9457bd68-0fcd-45ee-9625-4a82d4ad181d-kube-api-access-sjn78\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:10 crc kubenswrapper[5000]: I0105 22:13:10.170078 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:13:10 crc kubenswrapper[5000]: I0105 22:13:10.709344 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd"] Jan 05 22:13:10 crc kubenswrapper[5000]: I0105 22:13:10.742507 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" event={"ID":"9457bd68-0fcd-45ee-9625-4a82d4ad181d","Type":"ContainerStarted","Data":"ca8fb2ea95a7ab1b8f0fc67f219af0afaab088c1f1cdaaafc6f3a1fa3fee46cf"} Jan 05 22:13:11 crc kubenswrapper[5000]: I0105 22:13:11.751581 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" event={"ID":"9457bd68-0fcd-45ee-9625-4a82d4ad181d","Type":"ContainerStarted","Data":"d66271380c8d4ee438e52f01c952b4b71b8abf926cd13d1cf98a781ec682a1a2"} Jan 05 22:13:11 crc kubenswrapper[5000]: I0105 22:13:11.776375 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" podStartSLOduration=2.300932028 podStartE2EDuration="2.776358964s" podCreationTimestamp="2026-01-05 22:13:09 +0000 UTC" firstStartedPulling="2026-01-05 22:13:10.713106747 +0000 UTC m=+2345.669309236" lastFinishedPulling="2026-01-05 22:13:11.188533693 +0000 UTC m=+2346.144736172" observedRunningTime="2026-01-05 22:13:11.768007506 +0000 UTC m=+2346.724209965" watchObservedRunningTime="2026-01-05 22:13:11.776358964 +0000 UTC m=+2346.732561433" Jan 05 22:13:12 crc kubenswrapper[5000]: I0105 22:13:12.324108 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:13:12 crc kubenswrapper[5000]: E0105 22:13:12.324596 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:13:26 crc kubenswrapper[5000]: I0105 22:13:26.323788 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:13:26 crc kubenswrapper[5000]: E0105 22:13:26.324614 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:13:39 crc kubenswrapper[5000]: I0105 22:13:39.324218 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:13:39 crc kubenswrapper[5000]: E0105 22:13:39.325258 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:13:52 crc kubenswrapper[5000]: I0105 22:13:52.324051 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:13:52 crc kubenswrapper[5000]: E0105 22:13:52.324877 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:14:04 crc kubenswrapper[5000]: I0105 22:14:04.324548 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:14:04 crc kubenswrapper[5000]: E0105 22:14:04.325736 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:14:17 crc kubenswrapper[5000]: I0105 22:14:17.324309 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:14:17 crc kubenswrapper[5000]: E0105 22:14:17.325101 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:14:29 crc kubenswrapper[5000]: I0105 22:14:29.324251 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:14:29 crc kubenswrapper[5000]: E0105 22:14:29.325410 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:14:43 crc kubenswrapper[5000]: I0105 22:14:43.323589 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:14:43 crc kubenswrapper[5000]: E0105 22:14:43.324700 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:14:56 crc kubenswrapper[5000]: I0105 22:14:56.324293 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:14:56 crc kubenswrapper[5000]: E0105 22:14:56.325035 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.157505 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b"] Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.159818 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.162236 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.163319 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.170048 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b"] Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.215363 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-config-volume\") pod \"collect-profiles-29460855-gt24b\" (UID: \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.215570 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdnkk\" (UniqueName: \"kubernetes.io/projected/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-kube-api-access-sdnkk\") pod \"collect-profiles-29460855-gt24b\" (UID: \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.215617 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-secret-volume\") pod \"collect-profiles-29460855-gt24b\" (UID: \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.317939 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-config-volume\") pod \"collect-profiles-29460855-gt24b\" (UID: \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.318020 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdnkk\" (UniqueName: \"kubernetes.io/projected/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-kube-api-access-sdnkk\") pod \"collect-profiles-29460855-gt24b\" (UID: \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.318041 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-secret-volume\") pod \"collect-profiles-29460855-gt24b\" (UID: \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.319166 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-config-volume\") pod \"collect-profiles-29460855-gt24b\" (UID: \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.325421 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-secret-volume\") pod \"collect-profiles-29460855-gt24b\" (UID: \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.336542 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdnkk\" (UniqueName: \"kubernetes.io/projected/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-kube-api-access-sdnkk\") pod \"collect-profiles-29460855-gt24b\" (UID: \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.492285 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" Jan 05 22:15:00 crc kubenswrapper[5000]: I0105 22:15:00.910288 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b"] Jan 05 22:15:01 crc kubenswrapper[5000]: I0105 22:15:01.833649 5000 generic.go:334] "Generic (PLEG): container finished" podID="d1c79b86-e773-4b6f-ae02-9b80d4e42c50" containerID="a5e765d25340e05159aeac04e5e1ff5eda566bc7ebc9270ea8f6fbbd3c925655" exitCode=0 Jan 05 22:15:01 crc kubenswrapper[5000]: I0105 22:15:01.833708 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" event={"ID":"d1c79b86-e773-4b6f-ae02-9b80d4e42c50","Type":"ContainerDied","Data":"a5e765d25340e05159aeac04e5e1ff5eda566bc7ebc9270ea8f6fbbd3c925655"} Jan 05 22:15:01 crc kubenswrapper[5000]: I0105 22:15:01.834114 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" 
event={"ID":"d1c79b86-e773-4b6f-ae02-9b80d4e42c50","Type":"ContainerStarted","Data":"99331aac3b162bce2b51fabb0a6ba283890e9c88d333614ea10c3500a93a34b8"} Jan 05 22:15:03 crc kubenswrapper[5000]: I0105 22:15:03.232956 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" Jan 05 22:15:03 crc kubenswrapper[5000]: I0105 22:15:03.271791 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-secret-volume\") pod \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\" (UID: \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\") " Jan 05 22:15:03 crc kubenswrapper[5000]: I0105 22:15:03.271844 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-config-volume\") pod \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\" (UID: \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\") " Jan 05 22:15:03 crc kubenswrapper[5000]: I0105 22:15:03.272039 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdnkk\" (UniqueName: \"kubernetes.io/projected/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-kube-api-access-sdnkk\") pod \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\" (UID: \"d1c79b86-e773-4b6f-ae02-9b80d4e42c50\") " Jan 05 22:15:03 crc kubenswrapper[5000]: I0105 22:15:03.273275 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-config-volume" (OuterVolumeSpecName: "config-volume") pod "d1c79b86-e773-4b6f-ae02-9b80d4e42c50" (UID: "d1c79b86-e773-4b6f-ae02-9b80d4e42c50"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:15:03 crc kubenswrapper[5000]: I0105 22:15:03.277439 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-kube-api-access-sdnkk" (OuterVolumeSpecName: "kube-api-access-sdnkk") pod "d1c79b86-e773-4b6f-ae02-9b80d4e42c50" (UID: "d1c79b86-e773-4b6f-ae02-9b80d4e42c50"). InnerVolumeSpecName "kube-api-access-sdnkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:03 crc kubenswrapper[5000]: I0105 22:15:03.280444 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d1c79b86-e773-4b6f-ae02-9b80d4e42c50" (UID: "d1c79b86-e773-4b6f-ae02-9b80d4e42c50"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:03 crc kubenswrapper[5000]: I0105 22:15:03.373722 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdnkk\" (UniqueName: \"kubernetes.io/projected/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-kube-api-access-sdnkk\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:03 crc kubenswrapper[5000]: I0105 22:15:03.373756 5000 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:03 crc kubenswrapper[5000]: I0105 22:15:03.373765 5000 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1c79b86-e773-4b6f-ae02-9b80d4e42c50-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:03 crc kubenswrapper[5000]: I0105 22:15:03.851631 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" 
event={"ID":"d1c79b86-e773-4b6f-ae02-9b80d4e42c50","Type":"ContainerDied","Data":"99331aac3b162bce2b51fabb0a6ba283890e9c88d333614ea10c3500a93a34b8"} Jan 05 22:15:03 crc kubenswrapper[5000]: I0105 22:15:03.851932 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99331aac3b162bce2b51fabb0a6ba283890e9c88d333614ea10c3500a93a34b8" Jan 05 22:15:03 crc kubenswrapper[5000]: I0105 22:15:03.851695 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460855-gt24b" Jan 05 22:15:04 crc kubenswrapper[5000]: I0105 22:15:04.312009 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l"] Jan 05 22:15:04 crc kubenswrapper[5000]: I0105 22:15:04.320067 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460810-tr26l"] Jan 05 22:15:05 crc kubenswrapper[5000]: I0105 22:15:05.337764 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77750436-ae8c-4ab3-9647-dfd13c2822c6" path="/var/lib/kubelet/pods/77750436-ae8c-4ab3-9647-dfd13c2822c6/volumes" Jan 05 22:15:07 crc kubenswrapper[5000]: I0105 22:15:07.324772 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:15:07 crc kubenswrapper[5000]: E0105 22:15:07.325436 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:15:20 crc kubenswrapper[5000]: I0105 22:15:20.448447 5000 scope.go:117] "RemoveContainer" 
containerID="fb6465f66c0cd2329f1b84db157a05851c9f56e3d7d1b965b0ee93bc05230c7c" Jan 05 22:15:22 crc kubenswrapper[5000]: I0105 22:15:22.325491 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:15:22 crc kubenswrapper[5000]: E0105 22:15:22.326172 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:15:35 crc kubenswrapper[5000]: I0105 22:15:35.332398 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:15:35 crc kubenswrapper[5000]: E0105 22:15:35.333352 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:15:40 crc kubenswrapper[5000]: I0105 22:15:40.167996 5000 generic.go:334] "Generic (PLEG): container finished" podID="9457bd68-0fcd-45ee-9625-4a82d4ad181d" containerID="d66271380c8d4ee438e52f01c952b4b71b8abf926cd13d1cf98a781ec682a1a2" exitCode=0 Jan 05 22:15:40 crc kubenswrapper[5000]: I0105 22:15:40.168030 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" 
event={"ID":"9457bd68-0fcd-45ee-9625-4a82d4ad181d","Type":"ContainerDied","Data":"d66271380c8d4ee438e52f01c952b4b71b8abf926cd13d1cf98a781ec682a1a2"} Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.532150 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.734443 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-inventory\") pod \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.734495 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-1\") pod \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.734554 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ssh-key\") pod \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.734629 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-2\") pod \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.734692 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" 
(UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-0\") pod \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.734874 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-telemetry-combined-ca-bundle\") pod \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.734966 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjn78\" (UniqueName: \"kubernetes.io/projected/9457bd68-0fcd-45ee-9625-4a82d4ad181d-kube-api-access-sjn78\") pod \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\" (UID: \"9457bd68-0fcd-45ee-9625-4a82d4ad181d\") " Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.758809 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9457bd68-0fcd-45ee-9625-4a82d4ad181d-kube-api-access-sjn78" (OuterVolumeSpecName: "kube-api-access-sjn78") pod "9457bd68-0fcd-45ee-9625-4a82d4ad181d" (UID: "9457bd68-0fcd-45ee-9625-4a82d4ad181d"). InnerVolumeSpecName "kube-api-access-sjn78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.760165 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9457bd68-0fcd-45ee-9625-4a82d4ad181d" (UID: "9457bd68-0fcd-45ee-9625-4a82d4ad181d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.769051 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "9457bd68-0fcd-45ee-9625-4a82d4ad181d" (UID: "9457bd68-0fcd-45ee-9625-4a82d4ad181d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.775734 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "9457bd68-0fcd-45ee-9625-4a82d4ad181d" (UID: "9457bd68-0fcd-45ee-9625-4a82d4ad181d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.776603 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9457bd68-0fcd-45ee-9625-4a82d4ad181d" (UID: "9457bd68-0fcd-45ee-9625-4a82d4ad181d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.786098 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "9457bd68-0fcd-45ee-9625-4a82d4ad181d" (UID: "9457bd68-0fcd-45ee-9625-4a82d4ad181d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.801055 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-inventory" (OuterVolumeSpecName: "inventory") pod "9457bd68-0fcd-45ee-9625-4a82d4ad181d" (UID: "9457bd68-0fcd-45ee-9625-4a82d4ad181d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.837446 5000 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.837481 5000 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.837494 5000 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.837510 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjn78\" (UniqueName: \"kubernetes.io/projected/9457bd68-0fcd-45ee-9625-4a82d4ad181d-kube-api-access-sjn78\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.837522 5000 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-inventory\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.837534 5000 reconciler_common.go:293] 
"Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:41 crc kubenswrapper[5000]: I0105 22:15:41.837544 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9457bd68-0fcd-45ee-9625-4a82d4ad181d-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:15:42 crc kubenswrapper[5000]: I0105 22:15:42.184027 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" event={"ID":"9457bd68-0fcd-45ee-9625-4a82d4ad181d","Type":"ContainerDied","Data":"ca8fb2ea95a7ab1b8f0fc67f219af0afaab088c1f1cdaaafc6f3a1fa3fee46cf"} Jan 05 22:15:42 crc kubenswrapper[5000]: I0105 22:15:42.184068 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca8fb2ea95a7ab1b8f0fc67f219af0afaab088c1f1cdaaafc6f3a1fa3fee46cf" Jan 05 22:15:42 crc kubenswrapper[5000]: I0105 22:15:42.184117 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd" Jan 05 22:15:47 crc kubenswrapper[5000]: I0105 22:15:47.324220 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:15:47 crc kubenswrapper[5000]: E0105 22:15:47.325053 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:16:00 crc kubenswrapper[5000]: I0105 22:16:00.346525 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:16:00 crc kubenswrapper[5000]: E0105 22:16:00.347326 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:16:12 crc kubenswrapper[5000]: I0105 22:16:12.323919 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:16:12 crc kubenswrapper[5000]: E0105 22:16:12.324577 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:16:27 crc kubenswrapper[5000]: I0105 22:16:27.324134 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:16:27 crc kubenswrapper[5000]: I0105 22:16:27.575239 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"700238fe98bd1f925d83dd7adfd4a558c16c1e9ffab9d6af7c59cd17a9a072f8"} Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.171350 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 05 22:16:35 crc kubenswrapper[5000]: E0105 22:16:35.172506 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9457bd68-0fcd-45ee-9625-4a82d4ad181d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.172522 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="9457bd68-0fcd-45ee-9625-4a82d4ad181d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 05 22:16:35 crc kubenswrapper[5000]: E0105 22:16:35.172553 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c79b86-e773-4b6f-ae02-9b80d4e42c50" containerName="collect-profiles" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.172559 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c79b86-e773-4b6f-ae02-9b80d4e42c50" containerName="collect-profiles" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.172734 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="9457bd68-0fcd-45ee-9625-4a82d4ad181d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.172753 5000 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d1c79b86-e773-4b6f-ae02-9b80d4e42c50" containerName="collect-profiles" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.173590 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.177418 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.177627 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.177691 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-75lj5" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.177755 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.190834 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.340541 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.340638 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afff7bec-07b5-49b0-9b93-49f90b6c0214-config-data\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.340672 5000 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/afff7bec-07b5-49b0-9b93-49f90b6c0214-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.340699 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.340717 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lbdk\" (UniqueName: \"kubernetes.io/projected/afff7bec-07b5-49b0-9b93-49f90b6c0214-kube-api-access-2lbdk\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.340797 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/afff7bec-07b5-49b0-9b93-49f90b6c0214-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.340818 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 
22:16:35.340957 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/afff7bec-07b5-49b0-9b93-49f90b6c0214-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.341142 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.442842 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.443384 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afff7bec-07b5-49b0-9b93-49f90b6c0214-config-data\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.443430 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/afff7bec-07b5-49b0-9b93-49f90b6c0214-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.443472 5000 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.443503 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lbdk\" (UniqueName: \"kubernetes.io/projected/afff7bec-07b5-49b0-9b93-49f90b6c0214-kube-api-access-2lbdk\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.443548 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/afff7bec-07b5-49b0-9b93-49f90b6c0214-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.443576 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.443611 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/afff7bec-07b5-49b0-9b93-49f90b6c0214-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.443681 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.444326 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/afff7bec-07b5-49b0-9b93-49f90b6c0214-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.444333 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.445158 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/afff7bec-07b5-49b0-9b93-49f90b6c0214-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.445203 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/afff7bec-07b5-49b0-9b93-49f90b6c0214-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.445282 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afff7bec-07b5-49b0-9b93-49f90b6c0214-config-data\") 
pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.451397 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.451834 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.452640 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.460841 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lbdk\" (UniqueName: \"kubernetes.io/projected/afff7bec-07b5-49b0-9b93-49f90b6c0214-kube-api-access-2lbdk\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.473061 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.507384 5000 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.949402 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 05 22:16:35 crc kubenswrapper[5000]: I0105 22:16:35.958975 5000 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:16:36 crc kubenswrapper[5000]: I0105 22:16:36.676202 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"afff7bec-07b5-49b0-9b93-49f90b6c0214","Type":"ContainerStarted","Data":"b7a84a87f7eaf2ce1618dee134240929abe8936832425ebf2bd72858008f9af0"} Jan 05 22:17:03 crc kubenswrapper[5000]: E0105 22:17:03.798320 5000 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 05 22:17:03 crc kubenswrapper[5000]: E0105 22:17:03.798855 5000 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2lbdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(afff7bec-07b5-49b0-9b93-49f90b6c0214): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 05 22:17:03 crc kubenswrapper[5000]: E0105 22:17:03.800079 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="afff7bec-07b5-49b0-9b93-49f90b6c0214" Jan 05 22:17:03 crc kubenswrapper[5000]: E0105 22:17:03.917141 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="afff7bec-07b5-49b0-9b93-49f90b6c0214" Jan 05 22:17:18 crc 
kubenswrapper[5000]: I0105 22:17:18.904444 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 05 22:17:20 crc kubenswrapper[5000]: I0105 22:17:20.069079 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"afff7bec-07b5-49b0-9b93-49f90b6c0214","Type":"ContainerStarted","Data":"f9dceec64fc2d4f6bde5027390c92887ce9abd0a01fff2f68c0406f678f275fb"} Jan 05 22:18:53 crc kubenswrapper[5000]: I0105 22:18:53.099164 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:18:53 crc kubenswrapper[5000]: I0105 22:18:53.099748 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.156065 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=121.212485399 podStartE2EDuration="2m44.156039966s" podCreationTimestamp="2026-01-05 22:16:34 +0000 UTC" firstStartedPulling="2026-01-05 22:16:35.958761093 +0000 UTC m=+2550.914963562" lastFinishedPulling="2026-01-05 22:17:18.90231566 +0000 UTC m=+2593.858518129" observedRunningTime="2026-01-05 22:17:20.103100075 +0000 UTC m=+2595.059302554" watchObservedRunningTime="2026-01-05 22:19:18.156039966 +0000 UTC m=+2713.112242475" Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.173321 5000 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-b8wzr"] Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.179636 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.194989 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b8wzr"] Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.237955 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k8kb\" (UniqueName: \"kubernetes.io/projected/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-kube-api-access-8k8kb\") pod \"community-operators-b8wzr\" (UID: \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\") " pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.238024 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-utilities\") pod \"community-operators-b8wzr\" (UID: \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\") " pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.238064 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-catalog-content\") pod \"community-operators-b8wzr\" (UID: \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\") " pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.340157 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k8kb\" (UniqueName: \"kubernetes.io/projected/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-kube-api-access-8k8kb\") pod \"community-operators-b8wzr\" (UID: 
\"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\") " pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.340230 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-utilities\") pod \"community-operators-b8wzr\" (UID: \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\") " pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.340263 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-catalog-content\") pod \"community-operators-b8wzr\" (UID: \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\") " pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.340657 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-utilities\") pod \"community-operators-b8wzr\" (UID: \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\") " pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.341156 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-catalog-content\") pod \"community-operators-b8wzr\" (UID: \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\") " pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.374452 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k8kb\" (UniqueName: \"kubernetes.io/projected/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-kube-api-access-8k8kb\") pod \"community-operators-b8wzr\" (UID: \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\") " 
pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:18 crc kubenswrapper[5000]: I0105 22:19:18.552490 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:19 crc kubenswrapper[5000]: I0105 22:19:19.126804 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b8wzr"] Jan 05 22:19:19 crc kubenswrapper[5000]: W0105 22:19:19.134210 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf711fdac_d96c_423b_9ae3_cdafa4a5fc8a.slice/crio-558377c635c93ad01572eaa382c4047b70fe0f76f35867beab39253851795212 WatchSource:0}: Error finding container 558377c635c93ad01572eaa382c4047b70fe0f76f35867beab39253851795212: Status 404 returned error can't find the container with id 558377c635c93ad01572eaa382c4047b70fe0f76f35867beab39253851795212 Jan 05 22:19:19 crc kubenswrapper[5000]: I0105 22:19:19.302926 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8wzr" event={"ID":"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a","Type":"ContainerStarted","Data":"558377c635c93ad01572eaa382c4047b70fe0f76f35867beab39253851795212"} Jan 05 22:19:20 crc kubenswrapper[5000]: I0105 22:19:20.314102 5000 generic.go:334] "Generic (PLEG): container finished" podID="f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" containerID="8f1a46d3abf0849b14188a9a1cf345ab221bf0fe33cac420aa990b5d74686fe3" exitCode=0 Jan 05 22:19:20 crc kubenswrapper[5000]: I0105 22:19:20.314226 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8wzr" event={"ID":"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a","Type":"ContainerDied","Data":"8f1a46d3abf0849b14188a9a1cf345ab221bf0fe33cac420aa990b5d74686fe3"} Jan 05 22:19:21 crc kubenswrapper[5000]: I0105 22:19:21.354414 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-b8wzr" event={"ID":"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a","Type":"ContainerStarted","Data":"4699813025aa6685c30cf6f936b3d96ea4651f938666c56614339e3501d654d6"} Jan 05 22:19:22 crc kubenswrapper[5000]: I0105 22:19:22.341018 5000 generic.go:334] "Generic (PLEG): container finished" podID="f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" containerID="4699813025aa6685c30cf6f936b3d96ea4651f938666c56614339e3501d654d6" exitCode=0 Jan 05 22:19:22 crc kubenswrapper[5000]: I0105 22:19:22.341068 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8wzr" event={"ID":"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a","Type":"ContainerDied","Data":"4699813025aa6685c30cf6f936b3d96ea4651f938666c56614339e3501d654d6"} Jan 05 22:19:23 crc kubenswrapper[5000]: I0105 22:19:23.099144 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:19:23 crc kubenswrapper[5000]: I0105 22:19:23.099512 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:19:23 crc kubenswrapper[5000]: I0105 22:19:23.368041 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8wzr" event={"ID":"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a","Type":"ContainerStarted","Data":"24dc9d22e64ffd96af66bf4ce0322150de8f8f772f0271252fdd7918b7155461"} Jan 05 22:19:23 crc kubenswrapper[5000]: I0105 22:19:23.391598 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-b8wzr" podStartSLOduration=2.921755879 podStartE2EDuration="5.39158184s" podCreationTimestamp="2026-01-05 22:19:18 +0000 UTC" firstStartedPulling="2026-01-05 22:19:20.316697659 +0000 UTC m=+2715.272900168" lastFinishedPulling="2026-01-05 22:19:22.78652366 +0000 UTC m=+2717.742726129" observedRunningTime="2026-01-05 22:19:23.389223603 +0000 UTC m=+2718.345426112" watchObservedRunningTime="2026-01-05 22:19:23.39158184 +0000 UTC m=+2718.347784299" Jan 05 22:19:28 crc kubenswrapper[5000]: I0105 22:19:28.552782 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:28 crc kubenswrapper[5000]: I0105 22:19:28.553555 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:28 crc kubenswrapper[5000]: I0105 22:19:28.613073 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:29 crc kubenswrapper[5000]: I0105 22:19:29.519457 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:29 crc kubenswrapper[5000]: I0105 22:19:29.599603 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b8wzr"] Jan 05 22:19:31 crc kubenswrapper[5000]: I0105 22:19:31.450716 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b8wzr" podUID="f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" containerName="registry-server" containerID="cri-o://24dc9d22e64ffd96af66bf4ce0322150de8f8f772f0271252fdd7918b7155461" gracePeriod=2 Jan 05 22:19:31 crc kubenswrapper[5000]: I0105 22:19:31.955723 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.117257 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-catalog-content\") pod \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\" (UID: \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\") " Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.117644 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-utilities\") pod \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\" (UID: \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\") " Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.117704 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k8kb\" (UniqueName: \"kubernetes.io/projected/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-kube-api-access-8k8kb\") pod \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\" (UID: \"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a\") " Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.119458 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-utilities" (OuterVolumeSpecName: "utilities") pod "f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" (UID: "f711fdac-d96c-423b-9ae3-cdafa4a5fc8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.127439 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-kube-api-access-8k8kb" (OuterVolumeSpecName: "kube-api-access-8k8kb") pod "f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" (UID: "f711fdac-d96c-423b-9ae3-cdafa4a5fc8a"). InnerVolumeSpecName "kube-api-access-8k8kb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.162192 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" (UID: "f711fdac-d96c-423b-9ae3-cdafa4a5fc8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.219702 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k8kb\" (UniqueName: \"kubernetes.io/projected/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-kube-api-access-8k8kb\") on node \"crc\" DevicePath \"\"" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.219738 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.219747 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.461347 5000 generic.go:334] "Generic (PLEG): container finished" podID="f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" containerID="24dc9d22e64ffd96af66bf4ce0322150de8f8f772f0271252fdd7918b7155461" exitCode=0 Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.461387 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8wzr" event={"ID":"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a","Type":"ContainerDied","Data":"24dc9d22e64ffd96af66bf4ce0322150de8f8f772f0271252fdd7918b7155461"} Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.461412 5000 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-b8wzr" event={"ID":"f711fdac-d96c-423b-9ae3-cdafa4a5fc8a","Type":"ContainerDied","Data":"558377c635c93ad01572eaa382c4047b70fe0f76f35867beab39253851795212"} Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.461427 5000 scope.go:117] "RemoveContainer" containerID="24dc9d22e64ffd96af66bf4ce0322150de8f8f772f0271252fdd7918b7155461" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.461538 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b8wzr" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.482358 5000 scope.go:117] "RemoveContainer" containerID="4699813025aa6685c30cf6f936b3d96ea4651f938666c56614339e3501d654d6" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.506153 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b8wzr"] Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.515181 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b8wzr"] Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.525971 5000 scope.go:117] "RemoveContainer" containerID="8f1a46d3abf0849b14188a9a1cf345ab221bf0fe33cac420aa990b5d74686fe3" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.577928 5000 scope.go:117] "RemoveContainer" containerID="24dc9d22e64ffd96af66bf4ce0322150de8f8f772f0271252fdd7918b7155461" Jan 05 22:19:32 crc kubenswrapper[5000]: E0105 22:19:32.578424 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24dc9d22e64ffd96af66bf4ce0322150de8f8f772f0271252fdd7918b7155461\": container with ID starting with 24dc9d22e64ffd96af66bf4ce0322150de8f8f772f0271252fdd7918b7155461 not found: ID does not exist" containerID="24dc9d22e64ffd96af66bf4ce0322150de8f8f772f0271252fdd7918b7155461" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 
22:19:32.578459 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24dc9d22e64ffd96af66bf4ce0322150de8f8f772f0271252fdd7918b7155461"} err="failed to get container status \"24dc9d22e64ffd96af66bf4ce0322150de8f8f772f0271252fdd7918b7155461\": rpc error: code = NotFound desc = could not find container \"24dc9d22e64ffd96af66bf4ce0322150de8f8f772f0271252fdd7918b7155461\": container with ID starting with 24dc9d22e64ffd96af66bf4ce0322150de8f8f772f0271252fdd7918b7155461 not found: ID does not exist" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.578486 5000 scope.go:117] "RemoveContainer" containerID="4699813025aa6685c30cf6f936b3d96ea4651f938666c56614339e3501d654d6" Jan 05 22:19:32 crc kubenswrapper[5000]: E0105 22:19:32.578733 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4699813025aa6685c30cf6f936b3d96ea4651f938666c56614339e3501d654d6\": container with ID starting with 4699813025aa6685c30cf6f936b3d96ea4651f938666c56614339e3501d654d6 not found: ID does not exist" containerID="4699813025aa6685c30cf6f936b3d96ea4651f938666c56614339e3501d654d6" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.578757 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4699813025aa6685c30cf6f936b3d96ea4651f938666c56614339e3501d654d6"} err="failed to get container status \"4699813025aa6685c30cf6f936b3d96ea4651f938666c56614339e3501d654d6\": rpc error: code = NotFound desc = could not find container \"4699813025aa6685c30cf6f936b3d96ea4651f938666c56614339e3501d654d6\": container with ID starting with 4699813025aa6685c30cf6f936b3d96ea4651f938666c56614339e3501d654d6 not found: ID does not exist" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.578773 5000 scope.go:117] "RemoveContainer" containerID="8f1a46d3abf0849b14188a9a1cf345ab221bf0fe33cac420aa990b5d74686fe3" Jan 05 22:19:32 crc 
kubenswrapper[5000]: E0105 22:19:32.579024 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f1a46d3abf0849b14188a9a1cf345ab221bf0fe33cac420aa990b5d74686fe3\": container with ID starting with 8f1a46d3abf0849b14188a9a1cf345ab221bf0fe33cac420aa990b5d74686fe3 not found: ID does not exist" containerID="8f1a46d3abf0849b14188a9a1cf345ab221bf0fe33cac420aa990b5d74686fe3" Jan 05 22:19:32 crc kubenswrapper[5000]: I0105 22:19:32.579049 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f1a46d3abf0849b14188a9a1cf345ab221bf0fe33cac420aa990b5d74686fe3"} err="failed to get container status \"8f1a46d3abf0849b14188a9a1cf345ab221bf0fe33cac420aa990b5d74686fe3\": rpc error: code = NotFound desc = could not find container \"8f1a46d3abf0849b14188a9a1cf345ab221bf0fe33cac420aa990b5d74686fe3\": container with ID starting with 8f1a46d3abf0849b14188a9a1cf345ab221bf0fe33cac420aa990b5d74686fe3 not found: ID does not exist" Jan 05 22:19:33 crc kubenswrapper[5000]: I0105 22:19:33.337197 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" path="/var/lib/kubelet/pods/f711fdac-d96c-423b-9ae3-cdafa4a5fc8a/volumes" Jan 05 22:19:53 crc kubenswrapper[5000]: I0105 22:19:53.099436 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:19:53 crc kubenswrapper[5000]: I0105 22:19:53.100204 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 05 22:19:53 crc kubenswrapper[5000]: I0105 22:19:53.100278 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 22:19:53 crc kubenswrapper[5000]: I0105 22:19:53.101539 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"700238fe98bd1f925d83dd7adfd4a558c16c1e9ffab9d6af7c59cd17a9a072f8"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:19:53 crc kubenswrapper[5000]: I0105 22:19:53.101640 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://700238fe98bd1f925d83dd7adfd4a558c16c1e9ffab9d6af7c59cd17a9a072f8" gracePeriod=600 Jan 05 22:19:53 crc kubenswrapper[5000]: I0105 22:19:53.698610 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="700238fe98bd1f925d83dd7adfd4a558c16c1e9ffab9d6af7c59cd17a9a072f8" exitCode=0 Jan 05 22:19:53 crc kubenswrapper[5000]: I0105 22:19:53.698715 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"700238fe98bd1f925d83dd7adfd4a558c16c1e9ffab9d6af7c59cd17a9a072f8"} Jan 05 22:19:53 crc kubenswrapper[5000]: I0105 22:19:53.699197 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" 
event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf"} Jan 05 22:19:53 crc kubenswrapper[5000]: I0105 22:19:53.699232 5000 scope.go:117] "RemoveContainer" containerID="ad8a0d5374733ad09aa98a0a33d57f26f95460a81cc9fd4b7f6eb8d2852f3558" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.150759 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bv7zx"] Jan 05 22:20:57 crc kubenswrapper[5000]: E0105 22:20:57.151731 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" containerName="registry-server" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.151743 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" containerName="registry-server" Jan 05 22:20:57 crc kubenswrapper[5000]: E0105 22:20:57.151756 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" containerName="extract-utilities" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.151764 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" containerName="extract-utilities" Jan 05 22:20:57 crc kubenswrapper[5000]: E0105 22:20:57.151818 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" containerName="extract-content" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.151825 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" containerName="extract-content" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.151997 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="f711fdac-d96c-423b-9ae3-cdafa4a5fc8a" containerName="registry-server" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.153259 5000 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.175715 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bv7zx"] Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.198498 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9pgn\" (UniqueName: \"kubernetes.io/projected/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-kube-api-access-l9pgn\") pod \"redhat-operators-bv7zx\" (UID: \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\") " pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.198576 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-catalog-content\") pod \"redhat-operators-bv7zx\" (UID: \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\") " pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.198602 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-utilities\") pod \"redhat-operators-bv7zx\" (UID: \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\") " pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.300747 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9pgn\" (UniqueName: \"kubernetes.io/projected/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-kube-api-access-l9pgn\") pod \"redhat-operators-bv7zx\" (UID: \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\") " pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.301103 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-catalog-content\") pod \"redhat-operators-bv7zx\" (UID: \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\") " pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.301135 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-utilities\") pod \"redhat-operators-bv7zx\" (UID: \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\") " pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.301800 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-catalog-content\") pod \"redhat-operators-bv7zx\" (UID: \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\") " pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.301866 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-utilities\") pod \"redhat-operators-bv7zx\" (UID: \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\") " pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.336506 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9pgn\" (UniqueName: \"kubernetes.io/projected/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-kube-api-access-l9pgn\") pod \"redhat-operators-bv7zx\" (UID: \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\") " pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.478668 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:20:57 crc kubenswrapper[5000]: I0105 22:20:57.972059 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bv7zx"] Jan 05 22:20:58 crc kubenswrapper[5000]: I0105 22:20:58.304875 5000 generic.go:334] "Generic (PLEG): container finished" podID="64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" containerID="cf13921c9aa21cd92d7e9654cc6e003ea4bd1e744a6d142d2752f3d64c94ba6e" exitCode=0 Jan 05 22:20:58 crc kubenswrapper[5000]: I0105 22:20:58.305093 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bv7zx" event={"ID":"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49","Type":"ContainerDied","Data":"cf13921c9aa21cd92d7e9654cc6e003ea4bd1e744a6d142d2752f3d64c94ba6e"} Jan 05 22:20:58 crc kubenswrapper[5000]: I0105 22:20:58.305239 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bv7zx" event={"ID":"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49","Type":"ContainerStarted","Data":"a4c80014abfdb88f9226e1d56c19b39b49c447c54fdf592961178c313aa48615"} Jan 05 22:21:00 crc kubenswrapper[5000]: I0105 22:21:00.346480 5000 generic.go:334] "Generic (PLEG): container finished" podID="64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" containerID="75f1742e0f934a685bb631747b4226d77834e8233cb6c2a25c74228582de92bb" exitCode=0 Jan 05 22:21:00 crc kubenswrapper[5000]: I0105 22:21:00.346629 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bv7zx" event={"ID":"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49","Type":"ContainerDied","Data":"75f1742e0f934a685bb631747b4226d77834e8233cb6c2a25c74228582de92bb"} Jan 05 22:21:03 crc kubenswrapper[5000]: I0105 22:21:03.378813 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bv7zx" 
event={"ID":"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49","Type":"ContainerStarted","Data":"31db5f5069723ec6f4e366bfd1795caca3010ef06c951fe03f5ae3bfef0522e3"} Jan 05 22:21:03 crc kubenswrapper[5000]: I0105 22:21:03.408040 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bv7zx" podStartSLOduration=2.48084628 podStartE2EDuration="6.408016216s" podCreationTimestamp="2026-01-05 22:20:57 +0000 UTC" firstStartedPulling="2026-01-05 22:20:58.308293758 +0000 UTC m=+2813.264496227" lastFinishedPulling="2026-01-05 22:21:02.235463644 +0000 UTC m=+2817.191666163" observedRunningTime="2026-01-05 22:21:03.399821843 +0000 UTC m=+2818.356024302" watchObservedRunningTime="2026-01-05 22:21:03.408016216 +0000 UTC m=+2818.364218685" Jan 05 22:21:07 crc kubenswrapper[5000]: I0105 22:21:07.479700 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:21:07 crc kubenswrapper[5000]: I0105 22:21:07.480377 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:21:08 crc kubenswrapper[5000]: I0105 22:21:08.538752 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bv7zx" podUID="64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" containerName="registry-server" probeResult="failure" output=< Jan 05 22:21:08 crc kubenswrapper[5000]: timeout: failed to connect service ":50051" within 1s Jan 05 22:21:08 crc kubenswrapper[5000]: > Jan 05 22:21:17 crc kubenswrapper[5000]: I0105 22:21:17.524728 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:21:17 crc kubenswrapper[5000]: I0105 22:21:17.577522 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:21:17 crc kubenswrapper[5000]: I0105 
22:21:17.772223 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bv7zx"] Jan 05 22:21:19 crc kubenswrapper[5000]: I0105 22:21:19.542580 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bv7zx" podUID="64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" containerName="registry-server" containerID="cri-o://31db5f5069723ec6f4e366bfd1795caca3010ef06c951fe03f5ae3bfef0522e3" gracePeriod=2 Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.012303 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.089744 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-catalog-content\") pod \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\" (UID: \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\") " Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.089812 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9pgn\" (UniqueName: \"kubernetes.io/projected/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-kube-api-access-l9pgn\") pod \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\" (UID: \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\") " Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.089971 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-utilities\") pod \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\" (UID: \"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49\") " Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.090885 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-utilities" (OuterVolumeSpecName: 
"utilities") pod "64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" (UID: "64ce8f3a-3995-4fe4-b7a2-e9a8384bad49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.095407 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-kube-api-access-l9pgn" (OuterVolumeSpecName: "kube-api-access-l9pgn") pod "64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" (UID: "64ce8f3a-3995-4fe4-b7a2-e9a8384bad49"). InnerVolumeSpecName "kube-api-access-l9pgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.192362 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.192669 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9pgn\" (UniqueName: \"kubernetes.io/projected/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-kube-api-access-l9pgn\") on node \"crc\" DevicePath \"\"" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.211212 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" (UID: "64ce8f3a-3995-4fe4-b7a2-e9a8384bad49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.294178 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.552590 5000 generic.go:334] "Generic (PLEG): container finished" podID="64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" containerID="31db5f5069723ec6f4e366bfd1795caca3010ef06c951fe03f5ae3bfef0522e3" exitCode=0 Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.552650 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bv7zx" event={"ID":"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49","Type":"ContainerDied","Data":"31db5f5069723ec6f4e366bfd1795caca3010ef06c951fe03f5ae3bfef0522e3"} Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.552698 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bv7zx" event={"ID":"64ce8f3a-3995-4fe4-b7a2-e9a8384bad49","Type":"ContainerDied","Data":"a4c80014abfdb88f9226e1d56c19b39b49c447c54fdf592961178c313aa48615"} Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.552733 5000 scope.go:117] "RemoveContainer" containerID="31db5f5069723ec6f4e366bfd1795caca3010ef06c951fe03f5ae3bfef0522e3" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.553015 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bv7zx" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.602134 5000 scope.go:117] "RemoveContainer" containerID="75f1742e0f934a685bb631747b4226d77834e8233cb6c2a25c74228582de92bb" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.612996 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bv7zx"] Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.622196 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bv7zx"] Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.647695 5000 scope.go:117] "RemoveContainer" containerID="cf13921c9aa21cd92d7e9654cc6e003ea4bd1e744a6d142d2752f3d64c94ba6e" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.672717 5000 scope.go:117] "RemoveContainer" containerID="31db5f5069723ec6f4e366bfd1795caca3010ef06c951fe03f5ae3bfef0522e3" Jan 05 22:21:20 crc kubenswrapper[5000]: E0105 22:21:20.673332 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31db5f5069723ec6f4e366bfd1795caca3010ef06c951fe03f5ae3bfef0522e3\": container with ID starting with 31db5f5069723ec6f4e366bfd1795caca3010ef06c951fe03f5ae3bfef0522e3 not found: ID does not exist" containerID="31db5f5069723ec6f4e366bfd1795caca3010ef06c951fe03f5ae3bfef0522e3" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.673366 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31db5f5069723ec6f4e366bfd1795caca3010ef06c951fe03f5ae3bfef0522e3"} err="failed to get container status \"31db5f5069723ec6f4e366bfd1795caca3010ef06c951fe03f5ae3bfef0522e3\": rpc error: code = NotFound desc = could not find container \"31db5f5069723ec6f4e366bfd1795caca3010ef06c951fe03f5ae3bfef0522e3\": container with ID starting with 31db5f5069723ec6f4e366bfd1795caca3010ef06c951fe03f5ae3bfef0522e3 not found: ID does 
not exist" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.673392 5000 scope.go:117] "RemoveContainer" containerID="75f1742e0f934a685bb631747b4226d77834e8233cb6c2a25c74228582de92bb" Jan 05 22:21:20 crc kubenswrapper[5000]: E0105 22:21:20.673741 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75f1742e0f934a685bb631747b4226d77834e8233cb6c2a25c74228582de92bb\": container with ID starting with 75f1742e0f934a685bb631747b4226d77834e8233cb6c2a25c74228582de92bb not found: ID does not exist" containerID="75f1742e0f934a685bb631747b4226d77834e8233cb6c2a25c74228582de92bb" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.673851 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75f1742e0f934a685bb631747b4226d77834e8233cb6c2a25c74228582de92bb"} err="failed to get container status \"75f1742e0f934a685bb631747b4226d77834e8233cb6c2a25c74228582de92bb\": rpc error: code = NotFound desc = could not find container \"75f1742e0f934a685bb631747b4226d77834e8233cb6c2a25c74228582de92bb\": container with ID starting with 75f1742e0f934a685bb631747b4226d77834e8233cb6c2a25c74228582de92bb not found: ID does not exist" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.673981 5000 scope.go:117] "RemoveContainer" containerID="cf13921c9aa21cd92d7e9654cc6e003ea4bd1e744a6d142d2752f3d64c94ba6e" Jan 05 22:21:20 crc kubenswrapper[5000]: E0105 22:21:20.674460 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf13921c9aa21cd92d7e9654cc6e003ea4bd1e744a6d142d2752f3d64c94ba6e\": container with ID starting with cf13921c9aa21cd92d7e9654cc6e003ea4bd1e744a6d142d2752f3d64c94ba6e not found: ID does not exist" containerID="cf13921c9aa21cd92d7e9654cc6e003ea4bd1e744a6d142d2752f3d64c94ba6e" Jan 05 22:21:20 crc kubenswrapper[5000]: I0105 22:21:20.674493 5000 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf13921c9aa21cd92d7e9654cc6e003ea4bd1e744a6d142d2752f3d64c94ba6e"} err="failed to get container status \"cf13921c9aa21cd92d7e9654cc6e003ea4bd1e744a6d142d2752f3d64c94ba6e\": rpc error: code = NotFound desc = could not find container \"cf13921c9aa21cd92d7e9654cc6e003ea4bd1e744a6d142d2752f3d64c94ba6e\": container with ID starting with cf13921c9aa21cd92d7e9654cc6e003ea4bd1e744a6d142d2752f3d64c94ba6e not found: ID does not exist" Jan 05 22:21:21 crc kubenswrapper[5000]: I0105 22:21:21.335367 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" path="/var/lib/kubelet/pods/64ce8f3a-3995-4fe4-b7a2-e9a8384bad49/volumes" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.699579 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wsd5p"] Jan 05 22:21:29 crc kubenswrapper[5000]: E0105 22:21:29.700688 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" containerName="extract-content" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.700704 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" containerName="extract-content" Jan 05 22:21:29 crc kubenswrapper[5000]: E0105 22:21:29.700714 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" containerName="registry-server" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.700720 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" containerName="registry-server" Jan 05 22:21:29 crc kubenswrapper[5000]: E0105 22:21:29.700739 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" containerName="extract-utilities" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.700746 5000 
state_mem.go:107] "Deleted CPUSet assignment" podUID="64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" containerName="extract-utilities" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.700972 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ce8f3a-3995-4fe4-b7a2-e9a8384bad49" containerName="registry-server" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.702597 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.722202 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsd5p"] Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.774357 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-utilities\") pod \"redhat-marketplace-wsd5p\" (UID: \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\") " pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.774434 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-catalog-content\") pod \"redhat-marketplace-wsd5p\" (UID: \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\") " pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.774499 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkvv6\" (UniqueName: \"kubernetes.io/projected/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-kube-api-access-kkvv6\") pod \"redhat-marketplace-wsd5p\" (UID: \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\") " pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.876129 
5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkvv6\" (UniqueName: \"kubernetes.io/projected/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-kube-api-access-kkvv6\") pod \"redhat-marketplace-wsd5p\" (UID: \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\") " pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.876264 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-utilities\") pod \"redhat-marketplace-wsd5p\" (UID: \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\") " pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.876308 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-catalog-content\") pod \"redhat-marketplace-wsd5p\" (UID: \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\") " pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.876747 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-catalog-content\") pod \"redhat-marketplace-wsd5p\" (UID: \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\") " pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.877238 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-utilities\") pod \"redhat-marketplace-wsd5p\" (UID: \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\") " pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:29 crc kubenswrapper[5000]: I0105 22:21:29.895503 5000 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kkvv6\" (UniqueName: \"kubernetes.io/projected/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-kube-api-access-kkvv6\") pod \"redhat-marketplace-wsd5p\" (UID: \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\") " pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:30 crc kubenswrapper[5000]: I0105 22:21:30.022976 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:30 crc kubenswrapper[5000]: I0105 22:21:30.515248 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsd5p"] Jan 05 22:21:30 crc kubenswrapper[5000]: I0105 22:21:30.645731 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsd5p" event={"ID":"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86","Type":"ContainerStarted","Data":"677206ef5e63b695ef5e7d16b37c477e5169165c3d8306fc4af03d3cd6e88ac0"} Jan 05 22:21:31 crc kubenswrapper[5000]: I0105 22:21:31.657340 5000 generic.go:334] "Generic (PLEG): container finished" podID="49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" containerID="688612de16aacdc08e736e78b1795eae88c8e1aed99869b0560713e34af8ecc9" exitCode=0 Jan 05 22:21:31 crc kubenswrapper[5000]: I0105 22:21:31.657454 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsd5p" event={"ID":"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86","Type":"ContainerDied","Data":"688612de16aacdc08e736e78b1795eae88c8e1aed99869b0560713e34af8ecc9"} Jan 05 22:21:32 crc kubenswrapper[5000]: I0105 22:21:32.668422 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsd5p" event={"ID":"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86","Type":"ContainerStarted","Data":"2c9f67bfd53856d534bfd3640295a209292a2bf225d791a45b01c29c71f697fb"} Jan 05 22:21:33 crc kubenswrapper[5000]: I0105 22:21:33.677873 5000 generic.go:334] "Generic (PLEG): container finished" 
podID="49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" containerID="2c9f67bfd53856d534bfd3640295a209292a2bf225d791a45b01c29c71f697fb" exitCode=0 Jan 05 22:21:33 crc kubenswrapper[5000]: I0105 22:21:33.677936 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsd5p" event={"ID":"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86","Type":"ContainerDied","Data":"2c9f67bfd53856d534bfd3640295a209292a2bf225d791a45b01c29c71f697fb"} Jan 05 22:21:34 crc kubenswrapper[5000]: I0105 22:21:34.688469 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsd5p" event={"ID":"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86","Type":"ContainerStarted","Data":"e7c0a1118169e8fd28d1260b917397408128cb28f4d9093fbec6665d2d4d3a84"} Jan 05 22:21:34 crc kubenswrapper[5000]: I0105 22:21:34.711430 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wsd5p" podStartSLOduration=3.230083479 podStartE2EDuration="5.711414794s" podCreationTimestamp="2026-01-05 22:21:29 +0000 UTC" firstStartedPulling="2026-01-05 22:21:31.674848306 +0000 UTC m=+2846.631050785" lastFinishedPulling="2026-01-05 22:21:34.156179631 +0000 UTC m=+2849.112382100" observedRunningTime="2026-01-05 22:21:34.706366191 +0000 UTC m=+2849.662568660" watchObservedRunningTime="2026-01-05 22:21:34.711414794 +0000 UTC m=+2849.667617253" Jan 05 22:21:40 crc kubenswrapper[5000]: I0105 22:21:40.028280 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:40 crc kubenswrapper[5000]: I0105 22:21:40.028812 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:40 crc kubenswrapper[5000]: I0105 22:21:40.073409 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:40 crc 
kubenswrapper[5000]: I0105 22:21:40.779350 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:42 crc kubenswrapper[5000]: I0105 22:21:42.494291 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsd5p"] Jan 05 22:21:42 crc kubenswrapper[5000]: I0105 22:21:42.751603 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wsd5p" podUID="49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" containerName="registry-server" containerID="cri-o://e7c0a1118169e8fd28d1260b917397408128cb28f4d9093fbec6665d2d4d3a84" gracePeriod=2 Jan 05 22:21:43 crc kubenswrapper[5000]: I0105 22:21:43.774771 5000 generic.go:334] "Generic (PLEG): container finished" podID="49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" containerID="e7c0a1118169e8fd28d1260b917397408128cb28f4d9093fbec6665d2d4d3a84" exitCode=0 Jan 05 22:21:43 crc kubenswrapper[5000]: I0105 22:21:43.774949 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsd5p" event={"ID":"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86","Type":"ContainerDied","Data":"e7c0a1118169e8fd28d1260b917397408128cb28f4d9093fbec6665d2d4d3a84"} Jan 05 22:21:43 crc kubenswrapper[5000]: I0105 22:21:43.775327 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsd5p" event={"ID":"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86","Type":"ContainerDied","Data":"677206ef5e63b695ef5e7d16b37c477e5169165c3d8306fc4af03d3cd6e88ac0"} Jan 05 22:21:43 crc kubenswrapper[5000]: I0105 22:21:43.775352 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="677206ef5e63b695ef5e7d16b37c477e5169165c3d8306fc4af03d3cd6e88ac0" Jan 05 22:21:43 crc kubenswrapper[5000]: I0105 22:21:43.779047 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:43 crc kubenswrapper[5000]: I0105 22:21:43.974007 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-utilities\") pod \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\" (UID: \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\") " Jan 05 22:21:43 crc kubenswrapper[5000]: I0105 22:21:43.974112 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkvv6\" (UniqueName: \"kubernetes.io/projected/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-kube-api-access-kkvv6\") pod \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\" (UID: \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\") " Jan 05 22:21:43 crc kubenswrapper[5000]: I0105 22:21:43.974180 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-catalog-content\") pod \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\" (UID: \"49075e8e-f9fb-4ffc-ac00-b2b9595d5c86\") " Jan 05 22:21:43 crc kubenswrapper[5000]: I0105 22:21:43.975073 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-utilities" (OuterVolumeSpecName: "utilities") pod "49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" (UID: "49075e8e-f9fb-4ffc-ac00-b2b9595d5c86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:21:43 crc kubenswrapper[5000]: I0105 22:21:43.979262 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-kube-api-access-kkvv6" (OuterVolumeSpecName: "kube-api-access-kkvv6") pod "49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" (UID: "49075e8e-f9fb-4ffc-ac00-b2b9595d5c86"). InnerVolumeSpecName "kube-api-access-kkvv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:21:44 crc kubenswrapper[5000]: I0105 22:21:44.000305 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" (UID: "49075e8e-f9fb-4ffc-ac00-b2b9595d5c86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:21:44 crc kubenswrapper[5000]: I0105 22:21:44.076489 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:21:44 crc kubenswrapper[5000]: I0105 22:21:44.076526 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkvv6\" (UniqueName: \"kubernetes.io/projected/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-kube-api-access-kkvv6\") on node \"crc\" DevicePath \"\"" Jan 05 22:21:44 crc kubenswrapper[5000]: I0105 22:21:44.076544 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:21:44 crc kubenswrapper[5000]: I0105 22:21:44.783434 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsd5p" Jan 05 22:21:44 crc kubenswrapper[5000]: I0105 22:21:44.816030 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsd5p"] Jan 05 22:21:44 crc kubenswrapper[5000]: I0105 22:21:44.833691 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsd5p"] Jan 05 22:21:45 crc kubenswrapper[5000]: I0105 22:21:45.334535 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" path="/var/lib/kubelet/pods/49075e8e-f9fb-4ffc-ac00-b2b9595d5c86/volumes" Jan 05 22:21:53 crc kubenswrapper[5000]: I0105 22:21:53.099330 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:21:53 crc kubenswrapper[5000]: I0105 22:21:53.100605 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.055349 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5bjwd"] Jan 05 22:22:05 crc kubenswrapper[5000]: E0105 22:22:05.056971 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" containerName="registry-server" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.057006 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" containerName="registry-server" Jan 05 
22:22:05 crc kubenswrapper[5000]: E0105 22:22:05.057023 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" containerName="extract-utilities" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.057030 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" containerName="extract-utilities" Jan 05 22:22:05 crc kubenswrapper[5000]: E0105 22:22:05.057060 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" containerName="extract-content" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.057066 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" containerName="extract-content" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.057447 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="49075e8e-f9fb-4ffc-ac00-b2b9595d5c86" containerName="registry-server" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.058864 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.069823 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5bjwd"] Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.241206 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a21a2b-87c7-46e9-b481-b44eb0f091a7-catalog-content\") pod \"certified-operators-5bjwd\" (UID: \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\") " pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.241855 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a21a2b-87c7-46e9-b481-b44eb0f091a7-utilities\") pod \"certified-operators-5bjwd\" (UID: \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\") " pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.242062 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mg87\" (UniqueName: \"kubernetes.io/projected/92a21a2b-87c7-46e9-b481-b44eb0f091a7-kube-api-access-6mg87\") pod \"certified-operators-5bjwd\" (UID: \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\") " pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.344126 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a21a2b-87c7-46e9-b481-b44eb0f091a7-catalog-content\") pod \"certified-operators-5bjwd\" (UID: \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\") " pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.344176 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a21a2b-87c7-46e9-b481-b44eb0f091a7-utilities\") pod \"certified-operators-5bjwd\" (UID: \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\") " pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.344301 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mg87\" (UniqueName: \"kubernetes.io/projected/92a21a2b-87c7-46e9-b481-b44eb0f091a7-kube-api-access-6mg87\") pod \"certified-operators-5bjwd\" (UID: \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\") " pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.345494 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a21a2b-87c7-46e9-b481-b44eb0f091a7-catalog-content\") pod \"certified-operators-5bjwd\" (UID: \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\") " pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.346314 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a21a2b-87c7-46e9-b481-b44eb0f091a7-utilities\") pod \"certified-operators-5bjwd\" (UID: \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\") " pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.384011 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mg87\" (UniqueName: \"kubernetes.io/projected/92a21a2b-87c7-46e9-b481-b44eb0f091a7-kube-api-access-6mg87\") pod \"certified-operators-5bjwd\" (UID: \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\") " pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:05 crc kubenswrapper[5000]: I0105 22:22:05.681507 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:06 crc kubenswrapper[5000]: I0105 22:22:06.255397 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5bjwd"] Jan 05 22:22:06 crc kubenswrapper[5000]: I0105 22:22:06.976092 5000 generic.go:334] "Generic (PLEG): container finished" podID="92a21a2b-87c7-46e9-b481-b44eb0f091a7" containerID="573b76779a5358dedde317807a142c925606a74a3126f726560875af6b586663" exitCode=0 Jan 05 22:22:06 crc kubenswrapper[5000]: I0105 22:22:06.976196 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bjwd" event={"ID":"92a21a2b-87c7-46e9-b481-b44eb0f091a7","Type":"ContainerDied","Data":"573b76779a5358dedde317807a142c925606a74a3126f726560875af6b586663"} Jan 05 22:22:06 crc kubenswrapper[5000]: I0105 22:22:06.976533 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bjwd" event={"ID":"92a21a2b-87c7-46e9-b481-b44eb0f091a7","Type":"ContainerStarted","Data":"186b574cab257910f72fda66e434a457896cb9392090024001f48e66b64a7e57"} Jan 05 22:22:06 crc kubenswrapper[5000]: I0105 22:22:06.978341 5000 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:22:07 crc kubenswrapper[5000]: I0105 22:22:07.988889 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bjwd" event={"ID":"92a21a2b-87c7-46e9-b481-b44eb0f091a7","Type":"ContainerStarted","Data":"37fa125e9e097bd5b0a1ce5d2aab9ae49dd49dcdc995454ed8c162487e68a119"} Jan 05 22:22:09 crc kubenswrapper[5000]: I0105 22:22:09.001300 5000 generic.go:334] "Generic (PLEG): container finished" podID="92a21a2b-87c7-46e9-b481-b44eb0f091a7" containerID="37fa125e9e097bd5b0a1ce5d2aab9ae49dd49dcdc995454ed8c162487e68a119" exitCode=0 Jan 05 22:22:09 crc kubenswrapper[5000]: I0105 22:22:09.001440 5000 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-5bjwd" event={"ID":"92a21a2b-87c7-46e9-b481-b44eb0f091a7","Type":"ContainerDied","Data":"37fa125e9e097bd5b0a1ce5d2aab9ae49dd49dcdc995454ed8c162487e68a119"} Jan 05 22:22:10 crc kubenswrapper[5000]: I0105 22:22:10.013077 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bjwd" event={"ID":"92a21a2b-87c7-46e9-b481-b44eb0f091a7","Type":"ContainerStarted","Data":"2418ff43bdfed58503948d2254e62d30df6c23dca3c3aadbe5bb3127e6805524"} Jan 05 22:22:10 crc kubenswrapper[5000]: I0105 22:22:10.027243 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5bjwd" podStartSLOduration=2.515515311 podStartE2EDuration="5.027214011s" podCreationTimestamp="2026-01-05 22:22:05 +0000 UTC" firstStartedPulling="2026-01-05 22:22:06.978142366 +0000 UTC m=+2881.934344835" lastFinishedPulling="2026-01-05 22:22:09.489841066 +0000 UTC m=+2884.446043535" observedRunningTime="2026-01-05 22:22:10.026439769 +0000 UTC m=+2884.982642238" watchObservedRunningTime="2026-01-05 22:22:10.027214011 +0000 UTC m=+2884.983416510" Jan 05 22:22:15 crc kubenswrapper[5000]: I0105 22:22:15.682091 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:15 crc kubenswrapper[5000]: I0105 22:22:15.682700 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:15 crc kubenswrapper[5000]: I0105 22:22:15.727028 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:16 crc kubenswrapper[5000]: I0105 22:22:16.115181 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:16 crc kubenswrapper[5000]: I0105 
22:22:16.160383 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5bjwd"] Jan 05 22:22:18 crc kubenswrapper[5000]: I0105 22:22:18.086192 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5bjwd" podUID="92a21a2b-87c7-46e9-b481-b44eb0f091a7" containerName="registry-server" containerID="cri-o://2418ff43bdfed58503948d2254e62d30df6c23dca3c3aadbe5bb3127e6805524" gracePeriod=2 Jan 05 22:22:19 crc kubenswrapper[5000]: I0105 22:22:19.096201 5000 generic.go:334] "Generic (PLEG): container finished" podID="92a21a2b-87c7-46e9-b481-b44eb0f091a7" containerID="2418ff43bdfed58503948d2254e62d30df6c23dca3c3aadbe5bb3127e6805524" exitCode=0 Jan 05 22:22:19 crc kubenswrapper[5000]: I0105 22:22:19.096251 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bjwd" event={"ID":"92a21a2b-87c7-46e9-b481-b44eb0f091a7","Type":"ContainerDied","Data":"2418ff43bdfed58503948d2254e62d30df6c23dca3c3aadbe5bb3127e6805524"} Jan 05 22:22:19 crc kubenswrapper[5000]: I0105 22:22:19.820954 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:19 crc kubenswrapper[5000]: I0105 22:22:19.943533 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a21a2b-87c7-46e9-b481-b44eb0f091a7-catalog-content\") pod \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\" (UID: \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\") " Jan 05 22:22:19 crc kubenswrapper[5000]: I0105 22:22:19.943678 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mg87\" (UniqueName: \"kubernetes.io/projected/92a21a2b-87c7-46e9-b481-b44eb0f091a7-kube-api-access-6mg87\") pod \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\" (UID: \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\") " Jan 05 22:22:19 crc kubenswrapper[5000]: I0105 22:22:19.943724 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a21a2b-87c7-46e9-b481-b44eb0f091a7-utilities\") pod \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\" (UID: \"92a21a2b-87c7-46e9-b481-b44eb0f091a7\") " Jan 05 22:22:19 crc kubenswrapper[5000]: I0105 22:22:19.944575 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92a21a2b-87c7-46e9-b481-b44eb0f091a7-utilities" (OuterVolumeSpecName: "utilities") pod "92a21a2b-87c7-46e9-b481-b44eb0f091a7" (UID: "92a21a2b-87c7-46e9-b481-b44eb0f091a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:22:19 crc kubenswrapper[5000]: I0105 22:22:19.949457 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a21a2b-87c7-46e9-b481-b44eb0f091a7-kube-api-access-6mg87" (OuterVolumeSpecName: "kube-api-access-6mg87") pod "92a21a2b-87c7-46e9-b481-b44eb0f091a7" (UID: "92a21a2b-87c7-46e9-b481-b44eb0f091a7"). InnerVolumeSpecName "kube-api-access-6mg87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:22:20 crc kubenswrapper[5000]: I0105 22:22:20.007465 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92a21a2b-87c7-46e9-b481-b44eb0f091a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92a21a2b-87c7-46e9-b481-b44eb0f091a7" (UID: "92a21a2b-87c7-46e9-b481-b44eb0f091a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:22:20 crc kubenswrapper[5000]: I0105 22:22:20.045348 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mg87\" (UniqueName: \"kubernetes.io/projected/92a21a2b-87c7-46e9-b481-b44eb0f091a7-kube-api-access-6mg87\") on node \"crc\" DevicePath \"\"" Jan 05 22:22:20 crc kubenswrapper[5000]: I0105 22:22:20.045385 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a21a2b-87c7-46e9-b481-b44eb0f091a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:22:20 crc kubenswrapper[5000]: I0105 22:22:20.045394 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a21a2b-87c7-46e9-b481-b44eb0f091a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:22:20 crc kubenswrapper[5000]: I0105 22:22:20.107397 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5bjwd" event={"ID":"92a21a2b-87c7-46e9-b481-b44eb0f091a7","Type":"ContainerDied","Data":"186b574cab257910f72fda66e434a457896cb9392090024001f48e66b64a7e57"} Jan 05 22:22:20 crc kubenswrapper[5000]: I0105 22:22:20.107449 5000 scope.go:117] "RemoveContainer" containerID="2418ff43bdfed58503948d2254e62d30df6c23dca3c3aadbe5bb3127e6805524" Jan 05 22:22:20 crc kubenswrapper[5000]: I0105 22:22:20.107560 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5bjwd" Jan 05 22:22:20 crc kubenswrapper[5000]: I0105 22:22:20.143931 5000 scope.go:117] "RemoveContainer" containerID="37fa125e9e097bd5b0a1ce5d2aab9ae49dd49dcdc995454ed8c162487e68a119" Jan 05 22:22:20 crc kubenswrapper[5000]: I0105 22:22:20.176671 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5bjwd"] Jan 05 22:22:20 crc kubenswrapper[5000]: I0105 22:22:20.177778 5000 scope.go:117] "RemoveContainer" containerID="573b76779a5358dedde317807a142c925606a74a3126f726560875af6b586663" Jan 05 22:22:20 crc kubenswrapper[5000]: I0105 22:22:20.191425 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5bjwd"] Jan 05 22:22:21 crc kubenswrapper[5000]: I0105 22:22:21.333652 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92a21a2b-87c7-46e9-b481-b44eb0f091a7" path="/var/lib/kubelet/pods/92a21a2b-87c7-46e9-b481-b44eb0f091a7/volumes" Jan 05 22:22:23 crc kubenswrapper[5000]: I0105 22:22:23.098388 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:22:23 crc kubenswrapper[5000]: I0105 22:22:23.098685 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:22:53 crc kubenswrapper[5000]: I0105 22:22:53.099451 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:22:53 crc kubenswrapper[5000]: I0105 22:22:53.101301 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:22:53 crc kubenswrapper[5000]: I0105 22:22:53.101405 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 22:22:53 crc kubenswrapper[5000]: I0105 22:22:53.102660 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:22:53 crc kubenswrapper[5000]: I0105 22:22:53.102815 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" gracePeriod=600 Jan 05 22:22:53 crc kubenswrapper[5000]: E0105 22:22:53.232089 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:22:53 crc kubenswrapper[5000]: I0105 22:22:53.441611 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" exitCode=0 Jan 05 22:22:53 crc kubenswrapper[5000]: I0105 22:22:53.441654 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf"} Jan 05 22:22:53 crc kubenswrapper[5000]: I0105 22:22:53.442056 5000 scope.go:117] "RemoveContainer" containerID="700238fe98bd1f925d83dd7adfd4a558c16c1e9ffab9d6af7c59cd17a9a072f8" Jan 05 22:22:53 crc kubenswrapper[5000]: I0105 22:22:53.442772 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:22:53 crc kubenswrapper[5000]: E0105 22:22:53.443456 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:23:09 crc kubenswrapper[5000]: I0105 22:23:09.324193 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:23:09 crc kubenswrapper[5000]: E0105 22:23:09.325240 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:23:23 crc kubenswrapper[5000]: I0105 22:23:23.324260 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:23:23 crc kubenswrapper[5000]: E0105 22:23:23.324927 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:23:36 crc kubenswrapper[5000]: I0105 22:23:36.324001 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:23:36 crc kubenswrapper[5000]: E0105 22:23:36.324699 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:23:47 crc kubenswrapper[5000]: I0105 22:23:47.324266 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:23:47 crc kubenswrapper[5000]: E0105 22:23:47.325028 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:24:01 crc kubenswrapper[5000]: I0105 22:24:01.324926 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:24:01 crc kubenswrapper[5000]: E0105 22:24:01.329095 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:24:12 crc kubenswrapper[5000]: I0105 22:24:12.323919 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:24:12 crc kubenswrapper[5000]: E0105 22:24:12.324779 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:24:23 crc kubenswrapper[5000]: I0105 22:24:23.324083 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:24:23 crc kubenswrapper[5000]: E0105 22:24:23.326500 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:24:35 crc kubenswrapper[5000]: I0105 22:24:35.346236 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:24:35 crc kubenswrapper[5000]: E0105 22:24:35.347287 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:24:48 crc kubenswrapper[5000]: I0105 22:24:48.324214 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:24:48 crc kubenswrapper[5000]: E0105 22:24:48.325300 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:25:03 crc kubenswrapper[5000]: I0105 22:25:03.323944 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:25:03 crc kubenswrapper[5000]: E0105 22:25:03.325008 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:25:17 crc kubenswrapper[5000]: I0105 22:25:17.324001 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:25:17 crc kubenswrapper[5000]: E0105 22:25:17.325120 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:25:30 crc kubenswrapper[5000]: I0105 22:25:30.324045 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:25:30 crc kubenswrapper[5000]: E0105 22:25:30.324814 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:25:42 crc kubenswrapper[5000]: I0105 22:25:42.324128 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:25:42 crc kubenswrapper[5000]: E0105 22:25:42.324923 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:25:55 crc kubenswrapper[5000]: I0105 22:25:55.329607 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:25:55 crc kubenswrapper[5000]: E0105 22:25:55.330375 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:26:06 crc kubenswrapper[5000]: I0105 22:26:06.324204 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:26:06 crc kubenswrapper[5000]: E0105 22:26:06.325136 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:26:17 crc kubenswrapper[5000]: I0105 22:26:17.325511 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:26:17 crc kubenswrapper[5000]: E0105 22:26:17.326521 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:26:28 crc kubenswrapper[5000]: I0105 22:26:28.324714 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:26:28 crc kubenswrapper[5000]: E0105 22:26:28.325529 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:26:43 crc kubenswrapper[5000]: I0105 22:26:43.323519 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:26:43 crc kubenswrapper[5000]: E0105 22:26:43.324391 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:26:54 crc kubenswrapper[5000]: I0105 22:26:54.324812 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:26:54 crc kubenswrapper[5000]: E0105 22:26:54.326165 5000 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:27:09 crc kubenswrapper[5000]: I0105 22:27:09.324713 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:27:09 crc kubenswrapper[5000]: E0105 22:27:09.325731 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:27:20 crc kubenswrapper[5000]: I0105 22:27:20.326764 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:27:20 crc kubenswrapper[5000]: E0105 22:27:20.328080 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:27:35 crc kubenswrapper[5000]: I0105 22:27:35.337444 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:27:35 crc kubenswrapper[5000]: E0105 22:27:35.338611 5000 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:27:46 crc kubenswrapper[5000]: I0105 22:27:46.200662 5000 generic.go:334] "Generic (PLEG): container finished" podID="afff7bec-07b5-49b0-9b93-49f90b6c0214" containerID="f9dceec64fc2d4f6bde5027390c92887ce9abd0a01fff2f68c0406f678f275fb" exitCode=0 Jan 05 22:27:46 crc kubenswrapper[5000]: I0105 22:27:46.200756 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"afff7bec-07b5-49b0-9b93-49f90b6c0214","Type":"ContainerDied","Data":"f9dceec64fc2d4f6bde5027390c92887ce9abd0a01fff2f68c0406f678f275fb"} Jan 05 22:27:46 crc kubenswrapper[5000]: I0105 22:27:46.324073 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:27:46 crc kubenswrapper[5000]: E0105 22:27:46.324475 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.590088 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.653877 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-openstack-config-secret\") pod \"afff7bec-07b5-49b0-9b93-49f90b6c0214\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.653988 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-ca-certs\") pod \"afff7bec-07b5-49b0-9b93-49f90b6c0214\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.654054 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afff7bec-07b5-49b0-9b93-49f90b6c0214-config-data\") pod \"afff7bec-07b5-49b0-9b93-49f90b6c0214\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.654099 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/afff7bec-07b5-49b0-9b93-49f90b6c0214-openstack-config\") pod \"afff7bec-07b5-49b0-9b93-49f90b6c0214\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.654184 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/afff7bec-07b5-49b0-9b93-49f90b6c0214-test-operator-ephemeral-workdir\") pod \"afff7bec-07b5-49b0-9b93-49f90b6c0214\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.654230 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/afff7bec-07b5-49b0-9b93-49f90b6c0214-test-operator-ephemeral-temporary\") pod \"afff7bec-07b5-49b0-9b93-49f90b6c0214\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.654302 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-ssh-key\") pod \"afff7bec-07b5-49b0-9b93-49f90b6c0214\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.654365 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lbdk\" (UniqueName: \"kubernetes.io/projected/afff7bec-07b5-49b0-9b93-49f90b6c0214-kube-api-access-2lbdk\") pod \"afff7bec-07b5-49b0-9b93-49f90b6c0214\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.654391 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"afff7bec-07b5-49b0-9b93-49f90b6c0214\" (UID: \"afff7bec-07b5-49b0-9b93-49f90b6c0214\") " Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.655089 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afff7bec-07b5-49b0-9b93-49f90b6c0214-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "afff7bec-07b5-49b0-9b93-49f90b6c0214" (UID: "afff7bec-07b5-49b0-9b93-49f90b6c0214"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.655186 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afff7bec-07b5-49b0-9b93-49f90b6c0214-config-data" (OuterVolumeSpecName: "config-data") pod "afff7bec-07b5-49b0-9b93-49f90b6c0214" (UID: "afff7bec-07b5-49b0-9b93-49f90b6c0214"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.661713 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "afff7bec-07b5-49b0-9b93-49f90b6c0214" (UID: "afff7bec-07b5-49b0-9b93-49f90b6c0214"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.662714 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afff7bec-07b5-49b0-9b93-49f90b6c0214-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "afff7bec-07b5-49b0-9b93-49f90b6c0214" (UID: "afff7bec-07b5-49b0-9b93-49f90b6c0214"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.668317 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afff7bec-07b5-49b0-9b93-49f90b6c0214-kube-api-access-2lbdk" (OuterVolumeSpecName: "kube-api-access-2lbdk") pod "afff7bec-07b5-49b0-9b93-49f90b6c0214" (UID: "afff7bec-07b5-49b0-9b93-49f90b6c0214"). InnerVolumeSpecName "kube-api-access-2lbdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.684052 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "afff7bec-07b5-49b0-9b93-49f90b6c0214" (UID: "afff7bec-07b5-49b0-9b93-49f90b6c0214"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.686457 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "afff7bec-07b5-49b0-9b93-49f90b6c0214" (UID: "afff7bec-07b5-49b0-9b93-49f90b6c0214"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.687722 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "afff7bec-07b5-49b0-9b93-49f90b6c0214" (UID: "afff7bec-07b5-49b0-9b93-49f90b6c0214"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.704442 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afff7bec-07b5-49b0-9b93-49f90b6c0214-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "afff7bec-07b5-49b0-9b93-49f90b6c0214" (UID: "afff7bec-07b5-49b0-9b93-49f90b6c0214"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.756266 5000 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/afff7bec-07b5-49b0-9b93-49f90b6c0214-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.756340 5000 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/afff7bec-07b5-49b0-9b93-49f90b6c0214-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.756351 5000 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.756361 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lbdk\" (UniqueName: \"kubernetes.io/projected/afff7bec-07b5-49b0-9b93-49f90b6c0214-kube-api-access-2lbdk\") on node \"crc\" DevicePath \"\"" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.756395 5000 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.756405 5000 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.756414 5000 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/afff7bec-07b5-49b0-9b93-49f90b6c0214-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 05 
22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.756422 5000 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afff7bec-07b5-49b0-9b93-49f90b6c0214-config-data\") on node \"crc\" DevicePath \"\"" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.756434 5000 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/afff7bec-07b5-49b0-9b93-49f90b6c0214-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.776191 5000 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 05 22:27:47 crc kubenswrapper[5000]: I0105 22:27:47.858063 5000 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 05 22:27:48 crc kubenswrapper[5000]: I0105 22:27:48.216841 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"afff7bec-07b5-49b0-9b93-49f90b6c0214","Type":"ContainerDied","Data":"b7a84a87f7eaf2ce1618dee134240929abe8936832425ebf2bd72858008f9af0"} Jan 05 22:27:48 crc kubenswrapper[5000]: I0105 22:27:48.217291 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7a84a87f7eaf2ce1618dee134240929abe8936832425ebf2bd72858008f9af0" Jan 05 22:27:48 crc kubenswrapper[5000]: I0105 22:27:48.217217 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.476400 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 05 22:27:54 crc kubenswrapper[5000]: E0105 22:27:54.478266 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afff7bec-07b5-49b0-9b93-49f90b6c0214" containerName="tempest-tests-tempest-tests-runner" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.478365 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="afff7bec-07b5-49b0-9b93-49f90b6c0214" containerName="tempest-tests-tempest-tests-runner" Jan 05 22:27:54 crc kubenswrapper[5000]: E0105 22:27:54.478433 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a21a2b-87c7-46e9-b481-b44eb0f091a7" containerName="extract-content" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.478491 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a21a2b-87c7-46e9-b481-b44eb0f091a7" containerName="extract-content" Jan 05 22:27:54 crc kubenswrapper[5000]: E0105 22:27:54.478559 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a21a2b-87c7-46e9-b481-b44eb0f091a7" containerName="extract-utilities" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.478619 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a21a2b-87c7-46e9-b481-b44eb0f091a7" containerName="extract-utilities" Jan 05 22:27:54 crc kubenswrapper[5000]: E0105 22:27:54.478687 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a21a2b-87c7-46e9-b481-b44eb0f091a7" containerName="registry-server" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.478741 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a21a2b-87c7-46e9-b481-b44eb0f091a7" containerName="registry-server" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.479001 5000 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="92a21a2b-87c7-46e9-b481-b44eb0f091a7" containerName="registry-server" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.479121 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="afff7bec-07b5-49b0-9b93-49f90b6c0214" containerName="tempest-tests-tempest-tests-runner" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.479763 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.483384 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-75lj5" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.496587 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.624215 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq5hp\" (UniqueName: \"kubernetes.io/projected/6b25987a-4797-4b1a-be62-fef207e3aadc-kube-api-access-cq5hp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6b25987a-4797-4b1a-be62-fef207e3aadc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.624272 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6b25987a-4797-4b1a-be62-fef207e3aadc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.726432 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq5hp\" (UniqueName: 
\"kubernetes.io/projected/6b25987a-4797-4b1a-be62-fef207e3aadc-kube-api-access-cq5hp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6b25987a-4797-4b1a-be62-fef207e3aadc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.726482 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6b25987a-4797-4b1a-be62-fef207e3aadc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.727016 5000 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6b25987a-4797-4b1a-be62-fef207e3aadc\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.744412 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq5hp\" (UniqueName: \"kubernetes.io/projected/6b25987a-4797-4b1a-be62-fef207e3aadc-kube-api-access-cq5hp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6b25987a-4797-4b1a-be62-fef207e3aadc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 22:27:54 crc kubenswrapper[5000]: I0105 22:27:54.750847 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6b25987a-4797-4b1a-be62-fef207e3aadc\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 22:27:54 
crc kubenswrapper[5000]: I0105 22:27:54.806946 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 05 22:27:55 crc kubenswrapper[5000]: I0105 22:27:55.224857 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 05 22:27:55 crc kubenswrapper[5000]: I0105 22:27:55.231748 5000 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:27:55 crc kubenswrapper[5000]: I0105 22:27:55.267827 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6b25987a-4797-4b1a-be62-fef207e3aadc","Type":"ContainerStarted","Data":"8f679fa854ffffaf40e623431b65c3012f787212918860bc5e05d04679a05d11"} Jan 05 22:27:57 crc kubenswrapper[5000]: I0105 22:27:57.298301 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6b25987a-4797-4b1a-be62-fef207e3aadc","Type":"ContainerStarted","Data":"5ab20e51224f6a366b28adcdf8f081bfaf21fed2dad576d9669beb155f7d3190"} Jan 05 22:27:59 crc kubenswrapper[5000]: I0105 22:27:59.324875 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:28:00 crc kubenswrapper[5000]: I0105 22:28:00.331248 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"d12dc9705c21cac0e64dbae7543b906333864b72115a69c82d503f1459f34fba"} Jan 05 22:28:00 crc kubenswrapper[5000]: I0105 22:28:00.356057 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=5.003653339 
podStartE2EDuration="6.356035381s" podCreationTimestamp="2026-01-05 22:27:54 +0000 UTC" firstStartedPulling="2026-01-05 22:27:55.231493941 +0000 UTC m=+3230.187696420" lastFinishedPulling="2026-01-05 22:27:56.583875993 +0000 UTC m=+3231.540078462" observedRunningTime="2026-01-05 22:27:57.322675098 +0000 UTC m=+3232.278877567" watchObservedRunningTime="2026-01-05 22:28:00.356035381 +0000 UTC m=+3235.312237850" Jan 05 22:28:19 crc kubenswrapper[5000]: I0105 22:28:19.436378 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cj244/must-gather-wwfz5"] Jan 05 22:28:19 crc kubenswrapper[5000]: I0105 22:28:19.438478 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cj244/must-gather-wwfz5" Jan 05 22:28:19 crc kubenswrapper[5000]: I0105 22:28:19.447843 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cj244/must-gather-wwfz5"] Jan 05 22:28:19 crc kubenswrapper[5000]: I0105 22:28:19.454323 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cj244"/"default-dockercfg-22ph5" Jan 05 22:28:19 crc kubenswrapper[5000]: I0105 22:28:19.454394 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cj244"/"openshift-service-ca.crt" Jan 05 22:28:19 crc kubenswrapper[5000]: I0105 22:28:19.454495 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cj244"/"kube-root-ca.crt" Jan 05 22:28:19 crc kubenswrapper[5000]: I0105 22:28:19.639008 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfkfz\" (UniqueName: \"kubernetes.io/projected/7b84943b-bd96-47dc-94cc-b5e19a994d33-kube-api-access-hfkfz\") pod \"must-gather-wwfz5\" (UID: \"7b84943b-bd96-47dc-94cc-b5e19a994d33\") " pod="openshift-must-gather-cj244/must-gather-wwfz5" Jan 05 22:28:19 crc kubenswrapper[5000]: I0105 22:28:19.639105 5000 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7b84943b-bd96-47dc-94cc-b5e19a994d33-must-gather-output\") pod \"must-gather-wwfz5\" (UID: \"7b84943b-bd96-47dc-94cc-b5e19a994d33\") " pod="openshift-must-gather-cj244/must-gather-wwfz5" Jan 05 22:28:19 crc kubenswrapper[5000]: I0105 22:28:19.740695 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfkfz\" (UniqueName: \"kubernetes.io/projected/7b84943b-bd96-47dc-94cc-b5e19a994d33-kube-api-access-hfkfz\") pod \"must-gather-wwfz5\" (UID: \"7b84943b-bd96-47dc-94cc-b5e19a994d33\") " pod="openshift-must-gather-cj244/must-gather-wwfz5" Jan 05 22:28:19 crc kubenswrapper[5000]: I0105 22:28:19.740770 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7b84943b-bd96-47dc-94cc-b5e19a994d33-must-gather-output\") pod \"must-gather-wwfz5\" (UID: \"7b84943b-bd96-47dc-94cc-b5e19a994d33\") " pod="openshift-must-gather-cj244/must-gather-wwfz5" Jan 05 22:28:19 crc kubenswrapper[5000]: I0105 22:28:19.741214 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7b84943b-bd96-47dc-94cc-b5e19a994d33-must-gather-output\") pod \"must-gather-wwfz5\" (UID: \"7b84943b-bd96-47dc-94cc-b5e19a994d33\") " pod="openshift-must-gather-cj244/must-gather-wwfz5" Jan 05 22:28:19 crc kubenswrapper[5000]: I0105 22:28:19.760524 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfkfz\" (UniqueName: \"kubernetes.io/projected/7b84943b-bd96-47dc-94cc-b5e19a994d33-kube-api-access-hfkfz\") pod \"must-gather-wwfz5\" (UID: \"7b84943b-bd96-47dc-94cc-b5e19a994d33\") " pod="openshift-must-gather-cj244/must-gather-wwfz5" Jan 05 22:28:19 crc kubenswrapper[5000]: I0105 22:28:19.772065 5000 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cj244/must-gather-wwfz5" Jan 05 22:28:20 crc kubenswrapper[5000]: I0105 22:28:20.333860 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cj244/must-gather-wwfz5"] Jan 05 22:28:20 crc kubenswrapper[5000]: W0105 22:28:20.339769 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b84943b_bd96_47dc_94cc_b5e19a994d33.slice/crio-c1a838a199097efdffcfc2489c26255323ef7c162f569e9cbd5d5957ab87d1a6 WatchSource:0}: Error finding container c1a838a199097efdffcfc2489c26255323ef7c162f569e9cbd5d5957ab87d1a6: Status 404 returned error can't find the container with id c1a838a199097efdffcfc2489c26255323ef7c162f569e9cbd5d5957ab87d1a6 Jan 05 22:28:20 crc kubenswrapper[5000]: I0105 22:28:20.546702 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj244/must-gather-wwfz5" event={"ID":"7b84943b-bd96-47dc-94cc-b5e19a994d33","Type":"ContainerStarted","Data":"c1a838a199097efdffcfc2489c26255323ef7c162f569e9cbd5d5957ab87d1a6"} Jan 05 22:28:20 crc kubenswrapper[5000]: I0105 22:28:20.836998 5000 scope.go:117] "RemoveContainer" containerID="e7c0a1118169e8fd28d1260b917397408128cb28f4d9093fbec6665d2d4d3a84" Jan 05 22:28:20 crc kubenswrapper[5000]: I0105 22:28:20.861410 5000 scope.go:117] "RemoveContainer" containerID="2c9f67bfd53856d534bfd3640295a209292a2bf225d791a45b01c29c71f697fb" Jan 05 22:28:20 crc kubenswrapper[5000]: I0105 22:28:20.882776 5000 scope.go:117] "RemoveContainer" containerID="688612de16aacdc08e736e78b1795eae88c8e1aed99869b0560713e34af8ecc9" Jan 05 22:28:28 crc kubenswrapper[5000]: I0105 22:28:28.638360 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj244/must-gather-wwfz5" event={"ID":"7b84943b-bd96-47dc-94cc-b5e19a994d33","Type":"ContainerStarted","Data":"751ec60a037c77cfd45c2a0d134388a4d8ac4083b5e1c72b46bcf48cf140740b"} 
Jan 05 22:28:29 crc kubenswrapper[5000]: I0105 22:28:29.652577 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj244/must-gather-wwfz5" event={"ID":"7b84943b-bd96-47dc-94cc-b5e19a994d33","Type":"ContainerStarted","Data":"c8b3d3819b823a5ead507b67d9c9ac3aee33fc507818f4bfde9835b2797e0dd1"} Jan 05 22:28:29 crc kubenswrapper[5000]: I0105 22:28:29.675100 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cj244/must-gather-wwfz5" podStartSLOduration=2.665088701 podStartE2EDuration="10.675080868s" podCreationTimestamp="2026-01-05 22:28:19 +0000 UTC" firstStartedPulling="2026-01-05 22:28:20.343732351 +0000 UTC m=+3255.299934820" lastFinishedPulling="2026-01-05 22:28:28.353724528 +0000 UTC m=+3263.309926987" observedRunningTime="2026-01-05 22:28:29.667213204 +0000 UTC m=+3264.623415663" watchObservedRunningTime="2026-01-05 22:28:29.675080868 +0000 UTC m=+3264.631283337" Jan 05 22:28:31 crc kubenswrapper[5000]: I0105 22:28:31.995926 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cj244/crc-debug-l4qvt"] Jan 05 22:28:31 crc kubenswrapper[5000]: I0105 22:28:31.997480 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cj244/crc-debug-l4qvt" Jan 05 22:28:32 crc kubenswrapper[5000]: I0105 22:28:32.016317 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b622f630-b396-47ee-ae0f-35e69b98ffe6-host\") pod \"crc-debug-l4qvt\" (UID: \"b622f630-b396-47ee-ae0f-35e69b98ffe6\") " pod="openshift-must-gather-cj244/crc-debug-l4qvt" Jan 05 22:28:32 crc kubenswrapper[5000]: I0105 22:28:32.016371 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcp4j\" (UniqueName: \"kubernetes.io/projected/b622f630-b396-47ee-ae0f-35e69b98ffe6-kube-api-access-mcp4j\") pod \"crc-debug-l4qvt\" (UID: \"b622f630-b396-47ee-ae0f-35e69b98ffe6\") " pod="openshift-must-gather-cj244/crc-debug-l4qvt" Jan 05 22:28:32 crc kubenswrapper[5000]: I0105 22:28:32.118091 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b622f630-b396-47ee-ae0f-35e69b98ffe6-host\") pod \"crc-debug-l4qvt\" (UID: \"b622f630-b396-47ee-ae0f-35e69b98ffe6\") " pod="openshift-must-gather-cj244/crc-debug-l4qvt" Jan 05 22:28:32 crc kubenswrapper[5000]: I0105 22:28:32.118164 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcp4j\" (UniqueName: \"kubernetes.io/projected/b622f630-b396-47ee-ae0f-35e69b98ffe6-kube-api-access-mcp4j\") pod \"crc-debug-l4qvt\" (UID: \"b622f630-b396-47ee-ae0f-35e69b98ffe6\") " pod="openshift-must-gather-cj244/crc-debug-l4qvt" Jan 05 22:28:32 crc kubenswrapper[5000]: I0105 22:28:32.118205 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b622f630-b396-47ee-ae0f-35e69b98ffe6-host\") pod \"crc-debug-l4qvt\" (UID: \"b622f630-b396-47ee-ae0f-35e69b98ffe6\") " pod="openshift-must-gather-cj244/crc-debug-l4qvt" Jan 05 22:28:32 crc 
kubenswrapper[5000]: I0105 22:28:32.149781 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcp4j\" (UniqueName: \"kubernetes.io/projected/b622f630-b396-47ee-ae0f-35e69b98ffe6-kube-api-access-mcp4j\") pod \"crc-debug-l4qvt\" (UID: \"b622f630-b396-47ee-ae0f-35e69b98ffe6\") " pod="openshift-must-gather-cj244/crc-debug-l4qvt" Jan 05 22:28:32 crc kubenswrapper[5000]: I0105 22:28:32.318105 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cj244/crc-debug-l4qvt" Jan 05 22:28:32 crc kubenswrapper[5000]: W0105 22:28:32.349627 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb622f630_b396_47ee_ae0f_35e69b98ffe6.slice/crio-39973d90595d5ddce3cd05ed025a916f334bbf2b949ab78f425bd08ccf8f3841 WatchSource:0}: Error finding container 39973d90595d5ddce3cd05ed025a916f334bbf2b949ab78f425bd08ccf8f3841: Status 404 returned error can't find the container with id 39973d90595d5ddce3cd05ed025a916f334bbf2b949ab78f425bd08ccf8f3841 Jan 05 22:28:32 crc kubenswrapper[5000]: I0105 22:28:32.681955 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj244/crc-debug-l4qvt" event={"ID":"b622f630-b396-47ee-ae0f-35e69b98ffe6","Type":"ContainerStarted","Data":"39973d90595d5ddce3cd05ed025a916f334bbf2b949ab78f425bd08ccf8f3841"} Jan 05 22:28:45 crc kubenswrapper[5000]: I0105 22:28:45.806496 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj244/crc-debug-l4qvt" event={"ID":"b622f630-b396-47ee-ae0f-35e69b98ffe6","Type":"ContainerStarted","Data":"d8316623c2638ac607d0ccdc16d59b38a39ac4d2be24b6693565543cb1f30d63"} Jan 05 22:28:45 crc kubenswrapper[5000]: I0105 22:28:45.827742 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cj244/crc-debug-l4qvt" podStartSLOduration=2.191499458 podStartE2EDuration="14.827720673s" 
podCreationTimestamp="2026-01-05 22:28:31 +0000 UTC" firstStartedPulling="2026-01-05 22:28:32.35724072 +0000 UTC m=+3267.313443199" lastFinishedPulling="2026-01-05 22:28:44.993461945 +0000 UTC m=+3279.949664414" observedRunningTime="2026-01-05 22:28:45.82376778 +0000 UTC m=+3280.779970269" watchObservedRunningTime="2026-01-05 22:28:45.827720673 +0000 UTC m=+3280.783923132" Jan 05 22:29:20 crc kubenswrapper[5000]: I0105 22:29:20.872633 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rb5rh"] Jan 05 22:29:20 crc kubenswrapper[5000]: I0105 22:29:20.875083 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:20 crc kubenswrapper[5000]: I0105 22:29:20.895399 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rb5rh"] Jan 05 22:29:20 crc kubenswrapper[5000]: I0105 22:29:20.940726 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-catalog-content\") pod \"community-operators-rb5rh\" (UID: \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\") " pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:20 crc kubenswrapper[5000]: I0105 22:29:20.940852 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-utilities\") pod \"community-operators-rb5rh\" (UID: \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\") " pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:20 crc kubenswrapper[5000]: I0105 22:29:20.940933 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khp46\" (UniqueName: 
\"kubernetes.io/projected/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-kube-api-access-khp46\") pod \"community-operators-rb5rh\" (UID: \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\") " pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:21 crc kubenswrapper[5000]: I0105 22:29:21.042733 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-utilities\") pod \"community-operators-rb5rh\" (UID: \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\") " pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:21 crc kubenswrapper[5000]: I0105 22:29:21.043166 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khp46\" (UniqueName: \"kubernetes.io/projected/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-kube-api-access-khp46\") pod \"community-operators-rb5rh\" (UID: \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\") " pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:21 crc kubenswrapper[5000]: I0105 22:29:21.043285 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-catalog-content\") pod \"community-operators-rb5rh\" (UID: \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\") " pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:21 crc kubenswrapper[5000]: I0105 22:29:21.043685 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-catalog-content\") pod \"community-operators-rb5rh\" (UID: \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\") " pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:21 crc kubenswrapper[5000]: I0105 22:29:21.043726 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-utilities\") pod \"community-operators-rb5rh\" (UID: \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\") " pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:21 crc kubenswrapper[5000]: I0105 22:29:21.065044 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khp46\" (UniqueName: \"kubernetes.io/projected/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-kube-api-access-khp46\") pod \"community-operators-rb5rh\" (UID: \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\") " pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:21 crc kubenswrapper[5000]: I0105 22:29:21.193024 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:21 crc kubenswrapper[5000]: I0105 22:29:21.723766 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rb5rh"] Jan 05 22:29:22 crc kubenswrapper[5000]: I0105 22:29:22.121430 5000 generic.go:334] "Generic (PLEG): container finished" podID="e06573f3-954a-4a0f-83cd-5499ab3e1d7f" containerID="4228a85cbb6299b3c841b53d80187146b38b1ec44ae5155d8bb01a772b240abd" exitCode=0 Jan 05 22:29:22 crc kubenswrapper[5000]: I0105 22:29:22.121522 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rb5rh" event={"ID":"e06573f3-954a-4a0f-83cd-5499ab3e1d7f","Type":"ContainerDied","Data":"4228a85cbb6299b3c841b53d80187146b38b1ec44ae5155d8bb01a772b240abd"} Jan 05 22:29:22 crc kubenswrapper[5000]: I0105 22:29:22.121777 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rb5rh" event={"ID":"e06573f3-954a-4a0f-83cd-5499ab3e1d7f","Type":"ContainerStarted","Data":"fd690ff15877aaaed85a691056992aea2263306c6d0cb98796cd9d3aca70ad91"} Jan 05 22:29:23 crc kubenswrapper[5000]: I0105 22:29:23.131629 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-rb5rh" event={"ID":"e06573f3-954a-4a0f-83cd-5499ab3e1d7f","Type":"ContainerStarted","Data":"cb3b4d45dacf01945783e79450aca7561d079acb095e1ccf74988cb393eabc92"} Jan 05 22:29:24 crc kubenswrapper[5000]: I0105 22:29:24.140539 5000 generic.go:334] "Generic (PLEG): container finished" podID="e06573f3-954a-4a0f-83cd-5499ab3e1d7f" containerID="cb3b4d45dacf01945783e79450aca7561d079acb095e1ccf74988cb393eabc92" exitCode=0 Jan 05 22:29:24 crc kubenswrapper[5000]: I0105 22:29:24.140626 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rb5rh" event={"ID":"e06573f3-954a-4a0f-83cd-5499ab3e1d7f","Type":"ContainerDied","Data":"cb3b4d45dacf01945783e79450aca7561d079acb095e1ccf74988cb393eabc92"} Jan 05 22:29:25 crc kubenswrapper[5000]: I0105 22:29:25.151886 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rb5rh" event={"ID":"e06573f3-954a-4a0f-83cd-5499ab3e1d7f","Type":"ContainerStarted","Data":"a44620753e5cce1accf1af77ef02125a186c975270c9077fcfb09f7203d7429d"} Jan 05 22:29:25 crc kubenswrapper[5000]: I0105 22:29:25.174982 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rb5rh" podStartSLOduration=2.722125977 podStartE2EDuration="5.174955031s" podCreationTimestamp="2026-01-05 22:29:20 +0000 UTC" firstStartedPulling="2026-01-05 22:29:22.123248498 +0000 UTC m=+3317.079450967" lastFinishedPulling="2026-01-05 22:29:24.576077542 +0000 UTC m=+3319.532280021" observedRunningTime="2026-01-05 22:29:25.167753966 +0000 UTC m=+3320.123956475" watchObservedRunningTime="2026-01-05 22:29:25.174955031 +0000 UTC m=+3320.131157520" Jan 05 22:29:28 crc kubenswrapper[5000]: I0105 22:29:28.180667 5000 generic.go:334] "Generic (PLEG): container finished" podID="b622f630-b396-47ee-ae0f-35e69b98ffe6" containerID="d8316623c2638ac607d0ccdc16d59b38a39ac4d2be24b6693565543cb1f30d63" exitCode=0 
Jan 05 22:29:28 crc kubenswrapper[5000]: I0105 22:29:28.180731 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj244/crc-debug-l4qvt" event={"ID":"b622f630-b396-47ee-ae0f-35e69b98ffe6","Type":"ContainerDied","Data":"d8316623c2638ac607d0ccdc16d59b38a39ac4d2be24b6693565543cb1f30d63"} Jan 05 22:29:29 crc kubenswrapper[5000]: I0105 22:29:29.287342 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cj244/crc-debug-l4qvt" Jan 05 22:29:29 crc kubenswrapper[5000]: I0105 22:29:29.339711 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cj244/crc-debug-l4qvt"] Jan 05 22:29:29 crc kubenswrapper[5000]: I0105 22:29:29.339755 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cj244/crc-debug-l4qvt"] Jan 05 22:29:29 crc kubenswrapper[5000]: I0105 22:29:29.389638 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b622f630-b396-47ee-ae0f-35e69b98ffe6-host\") pod \"b622f630-b396-47ee-ae0f-35e69b98ffe6\" (UID: \"b622f630-b396-47ee-ae0f-35e69b98ffe6\") " Jan 05 22:29:29 crc kubenswrapper[5000]: I0105 22:29:29.389718 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcp4j\" (UniqueName: \"kubernetes.io/projected/b622f630-b396-47ee-ae0f-35e69b98ffe6-kube-api-access-mcp4j\") pod \"b622f630-b396-47ee-ae0f-35e69b98ffe6\" (UID: \"b622f630-b396-47ee-ae0f-35e69b98ffe6\") " Jan 05 22:29:29 crc kubenswrapper[5000]: I0105 22:29:29.389782 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b622f630-b396-47ee-ae0f-35e69b98ffe6-host" (OuterVolumeSpecName: "host") pod "b622f630-b396-47ee-ae0f-35e69b98ffe6" (UID: "b622f630-b396-47ee-ae0f-35e69b98ffe6"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:29:29 crc kubenswrapper[5000]: I0105 22:29:29.390489 5000 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b622f630-b396-47ee-ae0f-35e69b98ffe6-host\") on node \"crc\" DevicePath \"\"" Jan 05 22:29:29 crc kubenswrapper[5000]: I0105 22:29:29.399846 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b622f630-b396-47ee-ae0f-35e69b98ffe6-kube-api-access-mcp4j" (OuterVolumeSpecName: "kube-api-access-mcp4j") pod "b622f630-b396-47ee-ae0f-35e69b98ffe6" (UID: "b622f630-b396-47ee-ae0f-35e69b98ffe6"). InnerVolumeSpecName "kube-api-access-mcp4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:29:29 crc kubenswrapper[5000]: I0105 22:29:29.492771 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcp4j\" (UniqueName: \"kubernetes.io/projected/b622f630-b396-47ee-ae0f-35e69b98ffe6-kube-api-access-mcp4j\") on node \"crc\" DevicePath \"\"" Jan 05 22:29:30 crc kubenswrapper[5000]: I0105 22:29:30.200466 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39973d90595d5ddce3cd05ed025a916f334bbf2b949ab78f425bd08ccf8f3841" Jan 05 22:29:30 crc kubenswrapper[5000]: I0105 22:29:30.200505 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cj244/crc-debug-l4qvt" Jan 05 22:29:30 crc kubenswrapper[5000]: I0105 22:29:30.489811 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cj244/crc-debug-5jll8"] Jan 05 22:29:30 crc kubenswrapper[5000]: E0105 22:29:30.490494 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b622f630-b396-47ee-ae0f-35e69b98ffe6" containerName="container-00" Jan 05 22:29:30 crc kubenswrapper[5000]: I0105 22:29:30.490508 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="b622f630-b396-47ee-ae0f-35e69b98ffe6" containerName="container-00" Jan 05 22:29:30 crc kubenswrapper[5000]: I0105 22:29:30.490714 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="b622f630-b396-47ee-ae0f-35e69b98ffe6" containerName="container-00" Jan 05 22:29:30 crc kubenswrapper[5000]: I0105 22:29:30.491307 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cj244/crc-debug-5jll8" Jan 05 22:29:30 crc kubenswrapper[5000]: I0105 22:29:30.511928 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wsnv\" (UniqueName: \"kubernetes.io/projected/fa7c19aa-3bf2-43b5-a9b7-97a66006a856-kube-api-access-4wsnv\") pod \"crc-debug-5jll8\" (UID: \"fa7c19aa-3bf2-43b5-a9b7-97a66006a856\") " pod="openshift-must-gather-cj244/crc-debug-5jll8" Jan 05 22:29:30 crc kubenswrapper[5000]: I0105 22:29:30.512056 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa7c19aa-3bf2-43b5-a9b7-97a66006a856-host\") pod \"crc-debug-5jll8\" (UID: \"fa7c19aa-3bf2-43b5-a9b7-97a66006a856\") " pod="openshift-must-gather-cj244/crc-debug-5jll8" Jan 05 22:29:30 crc kubenswrapper[5000]: I0105 22:29:30.613474 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/fa7c19aa-3bf2-43b5-a9b7-97a66006a856-host\") pod \"crc-debug-5jll8\" (UID: \"fa7c19aa-3bf2-43b5-a9b7-97a66006a856\") " pod="openshift-must-gather-cj244/crc-debug-5jll8" Jan 05 22:29:30 crc kubenswrapper[5000]: I0105 22:29:30.613607 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa7c19aa-3bf2-43b5-a9b7-97a66006a856-host\") pod \"crc-debug-5jll8\" (UID: \"fa7c19aa-3bf2-43b5-a9b7-97a66006a856\") " pod="openshift-must-gather-cj244/crc-debug-5jll8" Jan 05 22:29:30 crc kubenswrapper[5000]: I0105 22:29:30.613846 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wsnv\" (UniqueName: \"kubernetes.io/projected/fa7c19aa-3bf2-43b5-a9b7-97a66006a856-kube-api-access-4wsnv\") pod \"crc-debug-5jll8\" (UID: \"fa7c19aa-3bf2-43b5-a9b7-97a66006a856\") " pod="openshift-must-gather-cj244/crc-debug-5jll8" Jan 05 22:29:30 crc kubenswrapper[5000]: I0105 22:29:30.630934 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wsnv\" (UniqueName: \"kubernetes.io/projected/fa7c19aa-3bf2-43b5-a9b7-97a66006a856-kube-api-access-4wsnv\") pod \"crc-debug-5jll8\" (UID: \"fa7c19aa-3bf2-43b5-a9b7-97a66006a856\") " pod="openshift-must-gather-cj244/crc-debug-5jll8" Jan 05 22:29:30 crc kubenswrapper[5000]: I0105 22:29:30.808200 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cj244/crc-debug-5jll8" Jan 05 22:29:31 crc kubenswrapper[5000]: I0105 22:29:31.193427 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:31 crc kubenswrapper[5000]: I0105 22:29:31.193730 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:31 crc kubenswrapper[5000]: I0105 22:29:31.212035 5000 generic.go:334] "Generic (PLEG): container finished" podID="fa7c19aa-3bf2-43b5-a9b7-97a66006a856" containerID="7f81016fafc1c96e44e4d095033c93456ad0b8c1daf8ad02ae854b67beab9cc4" exitCode=0 Jan 05 22:29:31 crc kubenswrapper[5000]: I0105 22:29:31.212080 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj244/crc-debug-5jll8" event={"ID":"fa7c19aa-3bf2-43b5-a9b7-97a66006a856","Type":"ContainerDied","Data":"7f81016fafc1c96e44e4d095033c93456ad0b8c1daf8ad02ae854b67beab9cc4"} Jan 05 22:29:31 crc kubenswrapper[5000]: I0105 22:29:31.212107 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj244/crc-debug-5jll8" event={"ID":"fa7c19aa-3bf2-43b5-a9b7-97a66006a856","Type":"ContainerStarted","Data":"ecfd3480bb044ce5a91324547aa8c1bfc7d6b73f46f4596ce7959f9466bce9db"} Jan 05 22:29:31 crc kubenswrapper[5000]: I0105 22:29:31.249297 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:31 crc kubenswrapper[5000]: I0105 22:29:31.293845 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:31 crc kubenswrapper[5000]: I0105 22:29:31.336938 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b622f630-b396-47ee-ae0f-35e69b98ffe6" path="/var/lib/kubelet/pods/b622f630-b396-47ee-ae0f-35e69b98ffe6/volumes" Jan 05 22:29:31 crc 
kubenswrapper[5000]: I0105 22:29:31.489660 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rb5rh"] Jan 05 22:29:31 crc kubenswrapper[5000]: I0105 22:29:31.682409 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cj244/crc-debug-5jll8"] Jan 05 22:29:31 crc kubenswrapper[5000]: I0105 22:29:31.692025 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cj244/crc-debug-5jll8"] Jan 05 22:29:32 crc kubenswrapper[5000]: I0105 22:29:32.329054 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cj244/crc-debug-5jll8" Jan 05 22:29:32 crc kubenswrapper[5000]: I0105 22:29:32.445330 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa7c19aa-3bf2-43b5-a9b7-97a66006a856-host\") pod \"fa7c19aa-3bf2-43b5-a9b7-97a66006a856\" (UID: \"fa7c19aa-3bf2-43b5-a9b7-97a66006a856\") " Jan 05 22:29:32 crc kubenswrapper[5000]: I0105 22:29:32.445426 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa7c19aa-3bf2-43b5-a9b7-97a66006a856-host" (OuterVolumeSpecName: "host") pod "fa7c19aa-3bf2-43b5-a9b7-97a66006a856" (UID: "fa7c19aa-3bf2-43b5-a9b7-97a66006a856"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:29:32 crc kubenswrapper[5000]: I0105 22:29:32.445482 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wsnv\" (UniqueName: \"kubernetes.io/projected/fa7c19aa-3bf2-43b5-a9b7-97a66006a856-kube-api-access-4wsnv\") pod \"fa7c19aa-3bf2-43b5-a9b7-97a66006a856\" (UID: \"fa7c19aa-3bf2-43b5-a9b7-97a66006a856\") " Jan 05 22:29:32 crc kubenswrapper[5000]: I0105 22:29:32.446187 5000 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa7c19aa-3bf2-43b5-a9b7-97a66006a856-host\") on node \"crc\" DevicePath \"\"" Jan 05 22:29:32 crc kubenswrapper[5000]: I0105 22:29:32.452391 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa7c19aa-3bf2-43b5-a9b7-97a66006a856-kube-api-access-4wsnv" (OuterVolumeSpecName: "kube-api-access-4wsnv") pod "fa7c19aa-3bf2-43b5-a9b7-97a66006a856" (UID: "fa7c19aa-3bf2-43b5-a9b7-97a66006a856"). InnerVolumeSpecName "kube-api-access-4wsnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:29:32 crc kubenswrapper[5000]: I0105 22:29:32.548088 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wsnv\" (UniqueName: \"kubernetes.io/projected/fa7c19aa-3bf2-43b5-a9b7-97a66006a856-kube-api-access-4wsnv\") on node \"crc\" DevicePath \"\"" Jan 05 22:29:32 crc kubenswrapper[5000]: I0105 22:29:32.849207 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cj244/crc-debug-xvxzj"] Jan 05 22:29:32 crc kubenswrapper[5000]: E0105 22:29:32.849594 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7c19aa-3bf2-43b5-a9b7-97a66006a856" containerName="container-00" Jan 05 22:29:32 crc kubenswrapper[5000]: I0105 22:29:32.849609 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7c19aa-3bf2-43b5-a9b7-97a66006a856" containerName="container-00" Jan 05 22:29:32 crc kubenswrapper[5000]: I0105 22:29:32.849806 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7c19aa-3bf2-43b5-a9b7-97a66006a856" containerName="container-00" Jan 05 22:29:32 crc kubenswrapper[5000]: I0105 22:29:32.850669 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cj244/crc-debug-xvxzj" Jan 05 22:29:32 crc kubenswrapper[5000]: I0105 22:29:32.959905 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2134b1ec-49bf-4652-963d-09fd1e5d3c5f-host\") pod \"crc-debug-xvxzj\" (UID: \"2134b1ec-49bf-4652-963d-09fd1e5d3c5f\") " pod="openshift-must-gather-cj244/crc-debug-xvxzj" Jan 05 22:29:32 crc kubenswrapper[5000]: I0105 22:29:32.959998 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x682\" (UniqueName: \"kubernetes.io/projected/2134b1ec-49bf-4652-963d-09fd1e5d3c5f-kube-api-access-5x682\") pod \"crc-debug-xvxzj\" (UID: \"2134b1ec-49bf-4652-963d-09fd1e5d3c5f\") " pod="openshift-must-gather-cj244/crc-debug-xvxzj" Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.062668 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2134b1ec-49bf-4652-963d-09fd1e5d3c5f-host\") pod \"crc-debug-xvxzj\" (UID: \"2134b1ec-49bf-4652-963d-09fd1e5d3c5f\") " pod="openshift-must-gather-cj244/crc-debug-xvxzj" Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.062756 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x682\" (UniqueName: \"kubernetes.io/projected/2134b1ec-49bf-4652-963d-09fd1e5d3c5f-kube-api-access-5x682\") pod \"crc-debug-xvxzj\" (UID: \"2134b1ec-49bf-4652-963d-09fd1e5d3c5f\") " pod="openshift-must-gather-cj244/crc-debug-xvxzj" Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.063035 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2134b1ec-49bf-4652-963d-09fd1e5d3c5f-host\") pod \"crc-debug-xvxzj\" (UID: \"2134b1ec-49bf-4652-963d-09fd1e5d3c5f\") " pod="openshift-must-gather-cj244/crc-debug-xvxzj" Jan 05 22:29:33 crc 
kubenswrapper[5000]: I0105 22:29:33.082558 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x682\" (UniqueName: \"kubernetes.io/projected/2134b1ec-49bf-4652-963d-09fd1e5d3c5f-kube-api-access-5x682\") pod \"crc-debug-xvxzj\" (UID: \"2134b1ec-49bf-4652-963d-09fd1e5d3c5f\") " pod="openshift-must-gather-cj244/crc-debug-xvxzj" Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.171169 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cj244/crc-debug-xvxzj" Jan 05 22:29:33 crc kubenswrapper[5000]: W0105 22:29:33.199932 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2134b1ec_49bf_4652_963d_09fd1e5d3c5f.slice/crio-8d6f55643a602e944ecc225ae026ace60fdb5c47f4aad008478eaa20d2d2db48 WatchSource:0}: Error finding container 8d6f55643a602e944ecc225ae026ace60fdb5c47f4aad008478eaa20d2d2db48: Status 404 returned error can't find the container with id 8d6f55643a602e944ecc225ae026ace60fdb5c47f4aad008478eaa20d2d2db48 Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.226739 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj244/crc-debug-xvxzj" event={"ID":"2134b1ec-49bf-4652-963d-09fd1e5d3c5f","Type":"ContainerStarted","Data":"8d6f55643a602e944ecc225ae026ace60fdb5c47f4aad008478eaa20d2d2db48"} Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.231555 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecfd3480bb044ce5a91324547aa8c1bfc7d6b73f46f4596ce7959f9466bce9db" Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.231562 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cj244/crc-debug-5jll8" Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.231683 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rb5rh" podUID="e06573f3-954a-4a0f-83cd-5499ab3e1d7f" containerName="registry-server" containerID="cri-o://a44620753e5cce1accf1af77ef02125a186c975270c9077fcfb09f7203d7429d" gracePeriod=2 Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.336640 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa7c19aa-3bf2-43b5-a9b7-97a66006a856" path="/var/lib/kubelet/pods/fa7c19aa-3bf2-43b5-a9b7-97a66006a856/volumes" Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.701461 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.879626 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-utilities\") pod \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\" (UID: \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\") " Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.880034 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khp46\" (UniqueName: \"kubernetes.io/projected/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-kube-api-access-khp46\") pod \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\" (UID: \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\") " Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.880066 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-catalog-content\") pod \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\" (UID: \"e06573f3-954a-4a0f-83cd-5499ab3e1d7f\") " Jan 05 22:29:33 crc 
kubenswrapper[5000]: I0105 22:29:33.880645 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-utilities" (OuterVolumeSpecName: "utilities") pod "e06573f3-954a-4a0f-83cd-5499ab3e1d7f" (UID: "e06573f3-954a-4a0f-83cd-5499ab3e1d7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.880807 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.886151 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-kube-api-access-khp46" (OuterVolumeSpecName: "kube-api-access-khp46") pod "e06573f3-954a-4a0f-83cd-5499ab3e1d7f" (UID: "e06573f3-954a-4a0f-83cd-5499ab3e1d7f"). InnerVolumeSpecName "kube-api-access-khp46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.941505 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e06573f3-954a-4a0f-83cd-5499ab3e1d7f" (UID: "e06573f3-954a-4a0f-83cd-5499ab3e1d7f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.982522 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khp46\" (UniqueName: \"kubernetes.io/projected/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-kube-api-access-khp46\") on node \"crc\" DevicePath \"\"" Jan 05 22:29:33 crc kubenswrapper[5000]: I0105 22:29:33.982559 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e06573f3-954a-4a0f-83cd-5499ab3e1d7f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.242348 5000 generic.go:334] "Generic (PLEG): container finished" podID="e06573f3-954a-4a0f-83cd-5499ab3e1d7f" containerID="a44620753e5cce1accf1af77ef02125a186c975270c9077fcfb09f7203d7429d" exitCode=0 Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.242406 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rb5rh" event={"ID":"e06573f3-954a-4a0f-83cd-5499ab3e1d7f","Type":"ContainerDied","Data":"a44620753e5cce1accf1af77ef02125a186c975270c9077fcfb09f7203d7429d"} Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.242432 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rb5rh" Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.242865 5000 scope.go:117] "RemoveContainer" containerID="a44620753e5cce1accf1af77ef02125a186c975270c9077fcfb09f7203d7429d" Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.242813 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rb5rh" event={"ID":"e06573f3-954a-4a0f-83cd-5499ab3e1d7f","Type":"ContainerDied","Data":"fd690ff15877aaaed85a691056992aea2263306c6d0cb98796cd9d3aca70ad91"} Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.245155 5000 generic.go:334] "Generic (PLEG): container finished" podID="2134b1ec-49bf-4652-963d-09fd1e5d3c5f" containerID="972c326ab0688a84439382153567a903500bcab3d4d55f34217f402f9eb35c0c" exitCode=0 Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.245183 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj244/crc-debug-xvxzj" event={"ID":"2134b1ec-49bf-4652-963d-09fd1e5d3c5f","Type":"ContainerDied","Data":"972c326ab0688a84439382153567a903500bcab3d4d55f34217f402f9eb35c0c"} Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.303250 5000 scope.go:117] "RemoveContainer" containerID="cb3b4d45dacf01945783e79450aca7561d079acb095e1ccf74988cb393eabc92" Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.305508 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cj244/crc-debug-xvxzj"] Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.318719 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cj244/crc-debug-xvxzj"] Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.322823 5000 scope.go:117] "RemoveContainer" containerID="4228a85cbb6299b3c841b53d80187146b38b1ec44ae5155d8bb01a772b240abd" Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.326759 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-rb5rh"] Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.337367 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rb5rh"] Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.370521 5000 scope.go:117] "RemoveContainer" containerID="a44620753e5cce1accf1af77ef02125a186c975270c9077fcfb09f7203d7429d" Jan 05 22:29:34 crc kubenswrapper[5000]: E0105 22:29:34.370927 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a44620753e5cce1accf1af77ef02125a186c975270c9077fcfb09f7203d7429d\": container with ID starting with a44620753e5cce1accf1af77ef02125a186c975270c9077fcfb09f7203d7429d not found: ID does not exist" containerID="a44620753e5cce1accf1af77ef02125a186c975270c9077fcfb09f7203d7429d" Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.370968 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a44620753e5cce1accf1af77ef02125a186c975270c9077fcfb09f7203d7429d"} err="failed to get container status \"a44620753e5cce1accf1af77ef02125a186c975270c9077fcfb09f7203d7429d\": rpc error: code = NotFound desc = could not find container \"a44620753e5cce1accf1af77ef02125a186c975270c9077fcfb09f7203d7429d\": container with ID starting with a44620753e5cce1accf1af77ef02125a186c975270c9077fcfb09f7203d7429d not found: ID does not exist" Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.370995 5000 scope.go:117] "RemoveContainer" containerID="cb3b4d45dacf01945783e79450aca7561d079acb095e1ccf74988cb393eabc92" Jan 05 22:29:34 crc kubenswrapper[5000]: E0105 22:29:34.371418 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb3b4d45dacf01945783e79450aca7561d079acb095e1ccf74988cb393eabc92\": container with ID starting with 
cb3b4d45dacf01945783e79450aca7561d079acb095e1ccf74988cb393eabc92 not found: ID does not exist" containerID="cb3b4d45dacf01945783e79450aca7561d079acb095e1ccf74988cb393eabc92" Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.371451 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb3b4d45dacf01945783e79450aca7561d079acb095e1ccf74988cb393eabc92"} err="failed to get container status \"cb3b4d45dacf01945783e79450aca7561d079acb095e1ccf74988cb393eabc92\": rpc error: code = NotFound desc = could not find container \"cb3b4d45dacf01945783e79450aca7561d079acb095e1ccf74988cb393eabc92\": container with ID starting with cb3b4d45dacf01945783e79450aca7561d079acb095e1ccf74988cb393eabc92 not found: ID does not exist" Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.371471 5000 scope.go:117] "RemoveContainer" containerID="4228a85cbb6299b3c841b53d80187146b38b1ec44ae5155d8bb01a772b240abd" Jan 05 22:29:34 crc kubenswrapper[5000]: E0105 22:29:34.371714 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4228a85cbb6299b3c841b53d80187146b38b1ec44ae5155d8bb01a772b240abd\": container with ID starting with 4228a85cbb6299b3c841b53d80187146b38b1ec44ae5155d8bb01a772b240abd not found: ID does not exist" containerID="4228a85cbb6299b3c841b53d80187146b38b1ec44ae5155d8bb01a772b240abd" Jan 05 22:29:34 crc kubenswrapper[5000]: I0105 22:29:34.371742 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4228a85cbb6299b3c841b53d80187146b38b1ec44ae5155d8bb01a772b240abd"} err="failed to get container status \"4228a85cbb6299b3c841b53d80187146b38b1ec44ae5155d8bb01a772b240abd\": rpc error: code = NotFound desc = could not find container \"4228a85cbb6299b3c841b53d80187146b38b1ec44ae5155d8bb01a772b240abd\": container with ID starting with 4228a85cbb6299b3c841b53d80187146b38b1ec44ae5155d8bb01a772b240abd not found: ID does not 
exist" Jan 05 22:29:35 crc kubenswrapper[5000]: I0105 22:29:35.338467 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e06573f3-954a-4a0f-83cd-5499ab3e1d7f" path="/var/lib/kubelet/pods/e06573f3-954a-4a0f-83cd-5499ab3e1d7f/volumes" Jan 05 22:29:35 crc kubenswrapper[5000]: I0105 22:29:35.346093 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cj244/crc-debug-xvxzj" Jan 05 22:29:35 crc kubenswrapper[5000]: I0105 22:29:35.507460 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2134b1ec-49bf-4652-963d-09fd1e5d3c5f-host\") pod \"2134b1ec-49bf-4652-963d-09fd1e5d3c5f\" (UID: \"2134b1ec-49bf-4652-963d-09fd1e5d3c5f\") " Jan 05 22:29:35 crc kubenswrapper[5000]: I0105 22:29:35.507597 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2134b1ec-49bf-4652-963d-09fd1e5d3c5f-host" (OuterVolumeSpecName: "host") pod "2134b1ec-49bf-4652-963d-09fd1e5d3c5f" (UID: "2134b1ec-49bf-4652-963d-09fd1e5d3c5f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:29:35 crc kubenswrapper[5000]: I0105 22:29:35.507611 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x682\" (UniqueName: \"kubernetes.io/projected/2134b1ec-49bf-4652-963d-09fd1e5d3c5f-kube-api-access-5x682\") pod \"2134b1ec-49bf-4652-963d-09fd1e5d3c5f\" (UID: \"2134b1ec-49bf-4652-963d-09fd1e5d3c5f\") " Jan 05 22:29:35 crc kubenswrapper[5000]: I0105 22:29:35.508794 5000 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2134b1ec-49bf-4652-963d-09fd1e5d3c5f-host\") on node \"crc\" DevicePath \"\"" Jan 05 22:29:35 crc kubenswrapper[5000]: I0105 22:29:35.513068 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2134b1ec-49bf-4652-963d-09fd1e5d3c5f-kube-api-access-5x682" (OuterVolumeSpecName: "kube-api-access-5x682") pod "2134b1ec-49bf-4652-963d-09fd1e5d3c5f" (UID: "2134b1ec-49bf-4652-963d-09fd1e5d3c5f"). InnerVolumeSpecName "kube-api-access-5x682". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:29:35 crc kubenswrapper[5000]: I0105 22:29:35.610626 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x682\" (UniqueName: \"kubernetes.io/projected/2134b1ec-49bf-4652-963d-09fd1e5d3c5f-kube-api-access-5x682\") on node \"crc\" DevicePath \"\"" Jan 05 22:29:36 crc kubenswrapper[5000]: I0105 22:29:36.266388 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cj244/crc-debug-xvxzj" Jan 05 22:29:36 crc kubenswrapper[5000]: I0105 22:29:36.266387 5000 scope.go:117] "RemoveContainer" containerID="972c326ab0688a84439382153567a903500bcab3d4d55f34217f402f9eb35c0c" Jan 05 22:29:37 crc kubenswrapper[5000]: I0105 22:29:37.333827 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2134b1ec-49bf-4652-963d-09fd1e5d3c5f" path="/var/lib/kubelet/pods/2134b1ec-49bf-4652-963d-09fd1e5d3c5f/volumes" Jan 05 22:29:49 crc kubenswrapper[5000]: I0105 22:29:49.703646 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59df95cbb-xkgb8_bd1efe56-77b9-43ee-9c00-563a30e3d948/barbican-api/0.log" Jan 05 22:29:49 crc kubenswrapper[5000]: I0105 22:29:49.841690 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59df95cbb-xkgb8_bd1efe56-77b9-43ee-9c00-563a30e3d948/barbican-api-log/0.log" Jan 05 22:29:49 crc kubenswrapper[5000]: I0105 22:29:49.927846 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b7c959586-6rv2n_dc0b4eb9-6ea0-470c-b684-35945245161c/barbican-keystone-listener/0.log" Jan 05 22:29:49 crc kubenswrapper[5000]: I0105 22:29:49.990678 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b7c959586-6rv2n_dc0b4eb9-6ea0-470c-b684-35945245161c/barbican-keystone-listener-log/0.log" Jan 05 22:29:50 crc kubenswrapper[5000]: I0105 22:29:50.140688 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b6686bbd5-nnkl5_b4c4d270-9b90-47d9-b076-feac4ab48232/barbican-worker/0.log" Jan 05 22:29:50 crc kubenswrapper[5000]: I0105 22:29:50.157673 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b6686bbd5-nnkl5_b4c4d270-9b90-47d9-b076-feac4ab48232/barbican-worker-log/0.log" Jan 05 22:29:50 crc kubenswrapper[5000]: I0105 22:29:50.339336 5000 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm_a03fd86d-bb7e-48cb-b37e-f94231148420/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:50 crc kubenswrapper[5000]: I0105 22:29:50.365071 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1/ceilometer-central-agent/0.log" Jan 05 22:29:50 crc kubenswrapper[5000]: I0105 22:29:50.466967 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1/ceilometer-notification-agent/0.log" Jan 05 22:29:50 crc kubenswrapper[5000]: I0105 22:29:50.532273 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1/sg-core/0.log" Jan 05 22:29:50 crc kubenswrapper[5000]: I0105 22:29:50.560238 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1/proxy-httpd/0.log" Jan 05 22:29:50 crc kubenswrapper[5000]: I0105 22:29:50.714692 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3278f23c-9157-4155-b406-e1ff0591348e/cinder-api/0.log" Jan 05 22:29:50 crc kubenswrapper[5000]: I0105 22:29:50.731281 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3278f23c-9157-4155-b406-e1ff0591348e/cinder-api-log/0.log" Jan 05 22:29:50 crc kubenswrapper[5000]: I0105 22:29:50.845789 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2ed63e4c-9365-423b-8eaf-a959b812ed86/cinder-scheduler/0.log" Jan 05 22:29:50 crc kubenswrapper[5000]: I0105 22:29:50.930123 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2ed63e4c-9365-423b-8eaf-a959b812ed86/probe/0.log" Jan 05 22:29:51 crc kubenswrapper[5000]: I0105 22:29:51.031036 5000 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh_85045115-6f3e-4624-9e9b-0db7e0a6419e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:51 crc kubenswrapper[5000]: I0105 22:29:51.134170 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp_83978ac1-3e0e-40e4-9009-0be10125c3a0/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:51 crc kubenswrapper[5000]: I0105 22:29:51.251597 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-n9pp4_0814f5ce-cff2-445e-9207-664fdcb0e357/init/0.log" Jan 05 22:29:51 crc kubenswrapper[5000]: I0105 22:29:51.358854 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-n9pp4_0814f5ce-cff2-445e-9207-664fdcb0e357/init/0.log" Jan 05 22:29:51 crc kubenswrapper[5000]: I0105 22:29:51.416056 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-n9pp4_0814f5ce-cff2-445e-9207-664fdcb0e357/dnsmasq-dns/0.log" Jan 05 22:29:51 crc kubenswrapper[5000]: I0105 22:29:51.453712 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-8zgst_65606fc1-6df2-4b19-8964-b69f04feb59b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:51 crc kubenswrapper[5000]: I0105 22:29:51.682864 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8587a6fa-051f-4c91-bb39-6c9bb628adbb/glance-log/0.log" Jan 05 22:29:51 crc kubenswrapper[5000]: I0105 22:29:51.702813 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8587a6fa-051f-4c91-bb39-6c9bb628adbb/glance-httpd/0.log" Jan 05 22:29:52 crc kubenswrapper[5000]: I0105 22:29:52.063674 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_62ae3bff-5f88-4662-86d4-0a4e1c51c8be/glance-httpd/0.log" Jan 05 22:29:52 crc kubenswrapper[5000]: I0105 22:29:52.152177 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_62ae3bff-5f88-4662-86d4-0a4e1c51c8be/glance-log/0.log" Jan 05 22:29:52 crc kubenswrapper[5000]: I0105 22:29:52.238392 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f48b4784d-5jgvr_ed51a505-1c96-4f98-879e-75283649a949/horizon/0.log" Jan 05 22:29:52 crc kubenswrapper[5000]: I0105 22:29:52.418002 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv_854b990c-d8e5-4735-b5d4-a522969647e9/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:52 crc kubenswrapper[5000]: I0105 22:29:52.546238 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f48b4784d-5jgvr_ed51a505-1c96-4f98-879e-75283649a949/horizon-log/0.log" Jan 05 22:29:52 crc kubenswrapper[5000]: I0105 22:29:52.625955 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qkwpv_7b55f097-bc7e-471e-88de-725221c23439/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:52 crc kubenswrapper[5000]: I0105 22:29:52.822543 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6c8579bfdd-r7vxj_edc2dca8-56cc-43b6-b35d-18b84ff237d3/keystone-api/0.log" Jan 05 22:29:52 crc kubenswrapper[5000]: I0105 22:29:52.843999 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29460841-tkgzh_15fb1cfb-41eb-4567-a694-821f1da15b07/keystone-cron/0.log" Jan 05 22:29:52 crc kubenswrapper[5000]: I0105 22:29:52.980982 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_1cb8a9e8-897c-4005-9ba7-555eeba1b6c1/kube-state-metrics/0.log" Jan 05 22:29:53 crc kubenswrapper[5000]: I0105 22:29:53.060156 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw_d3f9a210-263c-4290-8509-6b86ade6772c/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:53 crc kubenswrapper[5000]: I0105 22:29:53.685282 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86bdcd58d9-pztv2_43d9e1d2-3e87-4260-ba24-41e7cfbd4326/neutron-httpd/0.log" Jan 05 22:29:53 crc kubenswrapper[5000]: I0105 22:29:53.689846 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86bdcd58d9-pztv2_43d9e1d2-3e87-4260-ba24-41e7cfbd4326/neutron-api/0.log" Jan 05 22:29:53 crc kubenswrapper[5000]: I0105 22:29:53.912343 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg_e386442b-3735-4e85-8361-5a795c888c81/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:54 crc kubenswrapper[5000]: I0105 22:29:54.486507 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2c5dc335-0750-413c-a08d-6aaea2323daf/nova-api-log/0.log" Jan 05 22:29:54 crc kubenswrapper[5000]: I0105 22:29:54.515430 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3c91798a-921c-4031-8e5f-0752bebcc325/nova-cell0-conductor-conductor/0.log" Jan 05 22:29:54 crc kubenswrapper[5000]: I0105 22:29:54.680425 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2c5dc335-0750-413c-a08d-6aaea2323daf/nova-api-api/0.log" Jan 05 22:29:54 crc kubenswrapper[5000]: I0105 22:29:54.796598 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f341f64a-418c-4790-a14a-fc9768d6fc82/nova-cell1-conductor-conductor/0.log" 
Jan 05 22:29:54 crc kubenswrapper[5000]: I0105 22:29:54.847347 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_aa822db9-b962-42dd-a6c8-3774d9c6d477/nova-cell1-novncproxy-novncproxy/0.log" Jan 05 22:29:55 crc kubenswrapper[5000]: I0105 22:29:55.039766 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-j64gt_50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:55 crc kubenswrapper[5000]: I0105 22:29:55.208943 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ee3ead96-f298-4707-b5aa-3f310fd71ade/nova-metadata-log/0.log" Jan 05 22:29:55 crc kubenswrapper[5000]: I0105 22:29:55.480261 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a3923d31-eca2-40c4-b412-07b158c9fbcc/nova-scheduler-scheduler/0.log" Jan 05 22:29:55 crc kubenswrapper[5000]: I0105 22:29:55.715247 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_43e574d5-969c-40aa-abd6-69f81feef2c5/mysql-bootstrap/0.log" Jan 05 22:29:55 crc kubenswrapper[5000]: I0105 22:29:55.907418 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_43e574d5-969c-40aa-abd6-69f81feef2c5/mysql-bootstrap/0.log" Jan 05 22:29:55 crc kubenswrapper[5000]: I0105 22:29:55.966039 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_43e574d5-969c-40aa-abd6-69f81feef2c5/galera/0.log" Jan 05 22:29:56 crc kubenswrapper[5000]: I0105 22:29:56.103735 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb55e4be-34e2-4649-aa6a-24b2019cc9cf/mysql-bootstrap/0.log" Jan 05 22:29:56 crc kubenswrapper[5000]: I0105 22:29:56.240179 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_ee3ead96-f298-4707-b5aa-3f310fd71ade/nova-metadata-metadata/0.log" Jan 05 22:29:56 crc kubenswrapper[5000]: I0105 22:29:56.374258 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb55e4be-34e2-4649-aa6a-24b2019cc9cf/galera/0.log" Jan 05 22:29:56 crc kubenswrapper[5000]: I0105 22:29:56.389861 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb55e4be-34e2-4649-aa6a-24b2019cc9cf/mysql-bootstrap/0.log" Jan 05 22:29:56 crc kubenswrapper[5000]: I0105 22:29:56.464474 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_046f24d3-66d8-4a8b-bd20-d1f79426033b/openstackclient/0.log" Jan 05 22:29:56 crc kubenswrapper[5000]: I0105 22:29:56.707401 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-48f9l_2f01d9e3-692b-4648-b57f-3fb13e84379a/openstack-network-exporter/0.log" Jan 05 22:29:56 crc kubenswrapper[5000]: I0105 22:29:56.736152 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cgdx9_4e574607-e42c-4140-b43a-379ba76f4e73/ovsdb-server-init/0.log" Jan 05 22:29:56 crc kubenswrapper[5000]: I0105 22:29:56.921073 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cgdx9_4e574607-e42c-4140-b43a-379ba76f4e73/ovsdb-server-init/0.log" Jan 05 22:29:56 crc kubenswrapper[5000]: I0105 22:29:56.944859 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cgdx9_4e574607-e42c-4140-b43a-379ba76f4e73/ovsdb-server/0.log" Jan 05 22:29:56 crc kubenswrapper[5000]: I0105 22:29:56.972486 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cgdx9_4e574607-e42c-4140-b43a-379ba76f4e73/ovs-vswitchd/0.log" Jan 05 22:29:57 crc kubenswrapper[5000]: I0105 22:29:57.160402 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-qtwd6_30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1/ovn-controller/0.log" Jan 05 22:29:57 crc kubenswrapper[5000]: I0105 22:29:57.242829 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jwldt_d4dde70e-892f-44c4-b19d-d2e6292c2e18/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:57 crc kubenswrapper[5000]: I0105 22:29:57.372862 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_98ae3293-772a-4a0d-8b5e-245e02531e31/ovn-northd/0.log" Jan 05 22:29:57 crc kubenswrapper[5000]: I0105 22:29:57.387453 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_98ae3293-772a-4a0d-8b5e-245e02531e31/openstack-network-exporter/0.log" Jan 05 22:29:57 crc kubenswrapper[5000]: I0105 22:29:57.499020 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3e42459b-9f2f-45c6-8a77-6909cc2689a2/openstack-network-exporter/0.log" Jan 05 22:29:57 crc kubenswrapper[5000]: I0105 22:29:57.588351 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3e42459b-9f2f-45c6-8a77-6909cc2689a2/ovsdbserver-nb/0.log" Jan 05 22:29:57 crc kubenswrapper[5000]: I0105 22:29:57.703789 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f3628fb9-23a7-47e6-853a-e8f31311916f/openstack-network-exporter/0.log" Jan 05 22:29:57 crc kubenswrapper[5000]: I0105 22:29:57.736959 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f3628fb9-23a7-47e6-853a-e8f31311916f/ovsdbserver-sb/0.log" Jan 05 22:29:58 crc kubenswrapper[5000]: I0105 22:29:58.047758 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-859855f89d-t6p2g_1aa85c76-2f7d-4716-bd4c-4f6f53b75d01/placement-api/0.log" Jan 05 22:29:58 crc kubenswrapper[5000]: I0105 22:29:58.074976 5000 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_placement-859855f89d-t6p2g_1aa85c76-2f7d-4716-bd4c-4f6f53b75d01/placement-log/0.log" Jan 05 22:29:58 crc kubenswrapper[5000]: I0105 22:29:58.143357 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d62d32f0-a7e0-4949-82d3-5e35d8fbf43b/setup-container/0.log" Jan 05 22:29:58 crc kubenswrapper[5000]: I0105 22:29:58.416196 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ffcf6bf3-6f91-4afe-ba08-9e058c831480/setup-container/0.log" Jan 05 22:29:58 crc kubenswrapper[5000]: I0105 22:29:58.498158 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d62d32f0-a7e0-4949-82d3-5e35d8fbf43b/setup-container/0.log" Jan 05 22:29:58 crc kubenswrapper[5000]: I0105 22:29:58.533452 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d62d32f0-a7e0-4949-82d3-5e35d8fbf43b/rabbitmq/0.log" Jan 05 22:29:58 crc kubenswrapper[5000]: I0105 22:29:58.651999 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ffcf6bf3-6f91-4afe-ba08-9e058c831480/rabbitmq/0.log" Jan 05 22:29:58 crc kubenswrapper[5000]: I0105 22:29:58.658353 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ffcf6bf3-6f91-4afe-ba08-9e058c831480/setup-container/0.log" Jan 05 22:29:58 crc kubenswrapper[5000]: I0105 22:29:58.785262 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2_b441855d-0224-48d7-b39e-0930dbd9d1d5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:58 crc kubenswrapper[5000]: I0105 22:29:58.844605 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8l276_7d8b6f53-b39a-4cd8-9587-92cd0f427528/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:59 crc 
kubenswrapper[5000]: I0105 22:29:59.041717 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh_61ec2645-0703-42ad-96da-136ceb8b9cda/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:59 crc kubenswrapper[5000]: I0105 22:29:59.404346 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-t76t9_500728b5-6ea6-4696-b63d-36d1a1c64cce/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:29:59 crc kubenswrapper[5000]: I0105 22:29:59.482141 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-n2m7c_c816069b-4834-4cf8-ada8-c7bf3d339ba2/ssh-known-hosts-edpm-deployment/0.log" Jan 05 22:29:59 crc kubenswrapper[5000]: I0105 22:29:59.713802 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5759bb69bf-chpv9_b3694130-425f-4455-9275-0899d204bc66/proxy-httpd/0.log" Jan 05 22:29:59 crc kubenswrapper[5000]: I0105 22:29:59.734411 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5759bb69bf-chpv9_b3694130-425f-4455-9275-0899d204bc66/proxy-server/0.log" Jan 05 22:29:59 crc kubenswrapper[5000]: I0105 22:29:59.790935 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-nkpzh_bcee38b5-1aa2-4d3f-8545-dfc618226422/swift-ring-rebalance/0.log" Jan 05 22:29:59 crc kubenswrapper[5000]: I0105 22:29:59.920089 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/account-auditor/0.log" Jan 05 22:29:59 crc kubenswrapper[5000]: I0105 22:29:59.946578 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/account-reaper/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.036901 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/account-replicator/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.144623 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf"] Jan 05 22:30:00 crc kubenswrapper[5000]: E0105 22:30:00.144999 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e06573f3-954a-4a0f-83cd-5499ab3e1d7f" containerName="extract-utilities" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.145011 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="e06573f3-954a-4a0f-83cd-5499ab3e1d7f" containerName="extract-utilities" Jan 05 22:30:00 crc kubenswrapper[5000]: E0105 22:30:00.145414 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2134b1ec-49bf-4652-963d-09fd1e5d3c5f" containerName="container-00" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.145430 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="2134b1ec-49bf-4652-963d-09fd1e5d3c5f" containerName="container-00" Jan 05 22:30:00 crc kubenswrapper[5000]: E0105 22:30:00.145461 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e06573f3-954a-4a0f-83cd-5499ab3e1d7f" containerName="registry-server" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.145468 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="e06573f3-954a-4a0f-83cd-5499ab3e1d7f" containerName="registry-server" Jan 05 22:30:00 crc kubenswrapper[5000]: E0105 22:30:00.145485 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e06573f3-954a-4a0f-83cd-5499ab3e1d7f" containerName="extract-content" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.145493 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="e06573f3-954a-4a0f-83cd-5499ab3e1d7f" containerName="extract-content" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.145760 5000 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2134b1ec-49bf-4652-963d-09fd1e5d3c5f" containerName="container-00" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.145783 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="e06573f3-954a-4a0f-83cd-5499ab3e1d7f" containerName="registry-server" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.146734 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.148637 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.149001 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.157976 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf"] Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.208294 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/container-replicator/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.213562 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/account-server/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.229113 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/container-auditor/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.286016 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-config-volume\") pod \"collect-profiles-29460870-v6rlf\" (UID: \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.286106 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7whgq\" (UniqueName: \"kubernetes.io/projected/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-kube-api-access-7whgq\") pod \"collect-profiles-29460870-v6rlf\" (UID: \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.286155 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-secret-volume\") pod \"collect-profiles-29460870-v6rlf\" (UID: \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.331332 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/container-server/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.387792 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-config-volume\") pod \"collect-profiles-29460870-v6rlf\" (UID: \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.387877 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7whgq\" (UniqueName: 
\"kubernetes.io/projected/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-kube-api-access-7whgq\") pod \"collect-profiles-29460870-v6rlf\" (UID: \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.387930 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-secret-volume\") pod \"collect-profiles-29460870-v6rlf\" (UID: \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.389239 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-config-volume\") pod \"collect-profiles-29460870-v6rlf\" (UID: \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.406933 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-secret-volume\") pod \"collect-profiles-29460870-v6rlf\" (UID: \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.407743 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7whgq\" (UniqueName: \"kubernetes.io/projected/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-kube-api-access-7whgq\") pod \"collect-profiles-29460870-v6rlf\" (UID: \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.464167 5000 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/object-expirer/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.471190 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.495712 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/container-updater/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.510531 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/object-auditor/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.599134 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/object-replicator/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.745295 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/object-server/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.775531 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/rsync/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.854059 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/object-updater/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.919165 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/swift-recon-cron/0.log" Jan 05 22:30:00 crc kubenswrapper[5000]: I0105 22:30:00.998513 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf"] Jan 05 22:30:01 crc kubenswrapper[5000]: I0105 22:30:01.107449 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd_9457bd68-0fcd-45ee-9625-4a82d4ad181d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:30:01 crc kubenswrapper[5000]: I0105 22:30:01.211793 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_afff7bec-07b5-49b0-9b93-49f90b6c0214/tempest-tests-tempest-tests-runner/0.log" Jan 05 22:30:01 crc kubenswrapper[5000]: I0105 22:30:01.406259 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6b25987a-4797-4b1a-be62-fef207e3aadc/test-operator-logs-container/0.log" Jan 05 22:30:01 crc kubenswrapper[5000]: I0105 22:30:01.534026 5000 generic.go:334] "Generic (PLEG): container finished" podID="4682bb44-e0d3-47e8-a7fb-8868e86d5d66" containerID="1e77ae834b24c5b8f86d17356207a52ace6d0f3471824453f387d5ed64776f6a" exitCode=0 Jan 05 22:30:01 crc kubenswrapper[5000]: I0105 22:30:01.534069 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" event={"ID":"4682bb44-e0d3-47e8-a7fb-8868e86d5d66","Type":"ContainerDied","Data":"1e77ae834b24c5b8f86d17356207a52ace6d0f3471824453f387d5ed64776f6a"} Jan 05 22:30:01 crc kubenswrapper[5000]: I0105 22:30:01.534097 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" event={"ID":"4682bb44-e0d3-47e8-a7fb-8868e86d5d66","Type":"ContainerStarted","Data":"75aa882f838893d9eafb4bebfc7dcdc903a475cf96d2e59321c421259ca9069c"} Jan 05 22:30:01 crc kubenswrapper[5000]: I0105 22:30:01.657348 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6_7cff51b1-fa8c-43c0-8563-b83e0b4542cb/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:30:02 crc kubenswrapper[5000]: I0105 22:30:02.907190 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.056516 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-config-volume\") pod \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\" (UID: \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\") " Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.056565 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-secret-volume\") pod \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\" (UID: \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\") " Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.056773 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7whgq\" (UniqueName: \"kubernetes.io/projected/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-kube-api-access-7whgq\") pod \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\" (UID: \"4682bb44-e0d3-47e8-a7fb-8868e86d5d66\") " Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.058751 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-config-volume" (OuterVolumeSpecName: "config-volume") pod "4682bb44-e0d3-47e8-a7fb-8868e86d5d66" (UID: "4682bb44-e0d3-47e8-a7fb-8868e86d5d66"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.078989 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4682bb44-e0d3-47e8-a7fb-8868e86d5d66" (UID: "4682bb44-e0d3-47e8-a7fb-8868e86d5d66"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.081204 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-kube-api-access-7whgq" (OuterVolumeSpecName: "kube-api-access-7whgq") pod "4682bb44-e0d3-47e8-a7fb-8868e86d5d66" (UID: "4682bb44-e0d3-47e8-a7fb-8868e86d5d66"). InnerVolumeSpecName "kube-api-access-7whgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.160067 5000 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-config-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.160096 5000 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.160107 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7whgq\" (UniqueName: \"kubernetes.io/projected/4682bb44-e0d3-47e8-a7fb-8868e86d5d66-kube-api-access-7whgq\") on node \"crc\" DevicePath \"\"" Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.447729 5000 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podfa7c19aa-3bf2-43b5-a9b7-97a66006a856"] err="unable to 
destroy cgroup paths for cgroup [kubepods besteffort podfa7c19aa-3bf2-43b5-a9b7-97a66006a856] : Timed out while waiting for systemd to remove kubepods-besteffort-podfa7c19aa_3bf2_43b5_a9b7_97a66006a856.slice" Jan 05 22:30:03 crc kubenswrapper[5000]: E0105 22:30:03.447771 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podfa7c19aa-3bf2-43b5-a9b7-97a66006a856] : unable to destroy cgroup paths for cgroup [kubepods besteffort podfa7c19aa-3bf2-43b5-a9b7-97a66006a856] : Timed out while waiting for systemd to remove kubepods-besteffort-podfa7c19aa_3bf2_43b5_a9b7_97a66006a856.slice" pod="openshift-must-gather-cj244/crc-debug-5jll8" podUID="fa7c19aa-3bf2-43b5-a9b7-97a66006a856" Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.552811 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cj244/crc-debug-5jll8" Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.553146 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.553579 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29460870-v6rlf" event={"ID":"4682bb44-e0d3-47e8-a7fb-8868e86d5d66","Type":"ContainerDied","Data":"75aa882f838893d9eafb4bebfc7dcdc903a475cf96d2e59321c421259ca9069c"} Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.553607 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75aa882f838893d9eafb4bebfc7dcdc903a475cf96d2e59321c421259ca9069c" Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.985691 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq"] Jan 05 22:30:03 crc kubenswrapper[5000]: I0105 22:30:03.996379 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29460825-pxzpq"] Jan 05 22:30:05 crc kubenswrapper[5000]: I0105 22:30:05.336164 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b634923c-9274-4da5-9d49-783d92f632e9" path="/var/lib/kubelet/pods/b634923c-9274-4da5-9d49-783d92f632e9/volumes" Jan 05 22:30:09 crc kubenswrapper[5000]: I0105 22:30:09.676734 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b7b36978-e904-42dc-b2e9-cfd481f5b6f0/memcached/0.log" Jan 05 22:30:20 crc kubenswrapper[5000]: I0105 22:30:20.995110 5000 scope.go:117] "RemoveContainer" containerID="ac6338554e484ded93f7e8c247f935069a303a184ed78beee6d9d7d431a52eae" Jan 05 22:30:23 crc kubenswrapper[5000]: I0105 22:30:23.099500 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 05 22:30:23 crc kubenswrapper[5000]: I0105 22:30:23.100120 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:30:27 crc kubenswrapper[5000]: I0105 22:30:27.449391 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/util/0.log" Jan 05 22:30:27 crc kubenswrapper[5000]: I0105 22:30:27.639275 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/util/0.log" Jan 05 22:30:27 crc kubenswrapper[5000]: I0105 22:30:27.654411 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/pull/0.log" Jan 05 22:30:27 crc kubenswrapper[5000]: I0105 22:30:27.689590 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/pull/0.log" Jan 05 22:30:27 crc kubenswrapper[5000]: I0105 22:30:27.869058 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/extract/0.log" Jan 05 22:30:27 crc kubenswrapper[5000]: I0105 22:30:27.900816 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/pull/0.log" Jan 05 
22:30:27 crc kubenswrapper[5000]: I0105 22:30:27.914982 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/util/0.log" Jan 05 22:30:28 crc kubenswrapper[5000]: I0105 22:30:28.074544 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-mcqdp_3b7bc759-79ec-4375-848d-a4900428e360/manager/0.log" Jan 05 22:30:28 crc kubenswrapper[5000]: I0105 22:30:28.219481 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-p6wws_97262ac6-99c3-47d4-a2a4-401e945a53c7/manager/0.log" Jan 05 22:30:28 crc kubenswrapper[5000]: I0105 22:30:28.280254 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-jsbjc_2d94d179-bc23-416d-b4c7-6925b43d7131/manager/0.log" Jan 05 22:30:28 crc kubenswrapper[5000]: I0105 22:30:28.500418 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-2rhpx_a457b96c-32bc-4fbc-80e2-3567e1fdead4/manager/0.log" Jan 05 22:30:28 crc kubenswrapper[5000]: I0105 22:30:28.509244 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-2q8d7_a5f4bfce-86d7-4e99-984f-2a834fda3018/manager/0.log" Jan 05 22:30:28 crc kubenswrapper[5000]: I0105 22:30:28.660274 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-rcwpw_c246b6eb-3f29-404c-8b9c-f96bfc9ac87d/manager/0.log" Jan 05 22:30:28 crc kubenswrapper[5000]: I0105 22:30:28.944851 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-m8qfg_7750c973-b8d1-47f3-90ed-1034a7e6c33c/manager/0.log" Jan 05 22:30:28 crc kubenswrapper[5000]: I0105 22:30:28.974943 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-n9mxh_87ca26ac-b882-4e9a-8f90-27461a61453e/manager/0.log" Jan 05 22:30:29 crc kubenswrapper[5000]: I0105 22:30:29.104864 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-h7j5w_fe4fd66d-9294-437e-b21e-c66cf323999e/manager/0.log" Jan 05 22:30:29 crc kubenswrapper[5000]: I0105 22:30:29.181910 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-zg96g_450de243-6d71-4f61-836a-47028669d2b7/manager/0.log" Jan 05 22:30:29 crc kubenswrapper[5000]: I0105 22:30:29.304298 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-9smz4_bd739e2a-b4fb-43cb-bbc5-50b44e18bcfd/manager/0.log" Jan 05 22:30:29 crc kubenswrapper[5000]: I0105 22:30:29.410366 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-ghw2z_d60727e4-58b9-43ed-ae99-0c44cab79dc9/manager/0.log" Jan 05 22:30:29 crc kubenswrapper[5000]: I0105 22:30:29.611690 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-xrl9g_e376cad9-0c9e-423a-a1fb-b33246417cbb/manager/0.log" Jan 05 22:30:29 crc kubenswrapper[5000]: I0105 22:30:29.643947 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-h5tz2_bb2dd57d-6d64-4048-b69b-749250d948b9/manager/0.log" Jan 05 22:30:29 crc kubenswrapper[5000]: I0105 22:30:29.759105 5000 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h_f4d8f065-ce54-4bc9-9caf-e6a131e73a35/manager/0.log" Jan 05 22:30:30 crc kubenswrapper[5000]: I0105 22:30:30.078297 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-jm56r_3dfe8a9b-7998-4246-b195-b9a2ab968946/registry-server/0.log" Jan 05 22:30:30 crc kubenswrapper[5000]: I0105 22:30:30.298425 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-59bf84b846-bghfn_e31709ea-50f3-4b79-9851-e6c21b82aa58/operator/0.log" Jan 05 22:30:30 crc kubenswrapper[5000]: I0105 22:30:30.469834 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-lh5t8_42922f7b-4e7e-4ef1-b465-936097b98929/manager/0.log" Jan 05 22:30:30 crc kubenswrapper[5000]: I0105 22:30:30.618748 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-v6nfh_7dab6b1b-c641-4e22-a689-a1dc62da7733/manager/0.log" Jan 05 22:30:30 crc kubenswrapper[5000]: I0105 22:30:30.805593 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fv4wf_6e1e7b73-65c0-40db-964f-93e2d81d1004/operator/0.log" Jan 05 22:30:30 crc kubenswrapper[5000]: I0105 22:30:30.957022 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-pk7nh_2a8023f1-b9cf-4fa2-b421-b053941d4c42/manager/0.log" Jan 05 22:30:31 crc kubenswrapper[5000]: I0105 22:30:31.011761 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cd5f6db77-hgptq_fb31c907-60af-4a8c-a49f-977f28a18e20/manager/0.log" Jan 05 22:30:31 crc kubenswrapper[5000]: I0105 22:30:31.121404 5000 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-9cd8n_1236464f-4580-4f31-ab8b-a22d559aa8c3/manager/0.log" Jan 05 22:30:31 crc kubenswrapper[5000]: I0105 22:30:31.180021 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-dzjnd_95d67b6f-d50a-49c6-b866-9926f4b9e495/manager/0.log" Jan 05 22:30:31 crc kubenswrapper[5000]: I0105 22:30:31.263683 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-whzx7_5830ae86-6c11-4567-8f4a-28d4e3251c07/manager/0.log" Jan 05 22:30:48 crc kubenswrapper[5000]: I0105 22:30:48.635435 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2cjfv_81ff9dcc-be92-40cf-b45b-ba49fc78918a/control-plane-machine-set-operator/0.log" Jan 05 22:30:48 crc kubenswrapper[5000]: I0105 22:30:48.827847 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cfzn2_096d4722-b423-4819-a8fb-61556963fd3a/kube-rbac-proxy/0.log" Jan 05 22:30:48 crc kubenswrapper[5000]: I0105 22:30:48.887773 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cfzn2_096d4722-b423-4819-a8fb-61556963fd3a/machine-api-operator/0.log" Jan 05 22:30:53 crc kubenswrapper[5000]: I0105 22:30:53.099273 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:30:53 crc kubenswrapper[5000]: I0105 22:30:53.100656 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" 
podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:30:59 crc kubenswrapper[5000]: I0105 22:30:59.661019 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-d7hcb_0edf1980-d816-4cf8-ac70-c0a92cb8ca7c/cert-manager-controller/0.log" Jan 05 22:30:59 crc kubenswrapper[5000]: I0105 22:30:59.884356 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-mvh6l_e567f6b1-10dc-4a2a-9ebb-2837b486af32/cert-manager-cainjector/0.log" Jan 05 22:30:59 crc kubenswrapper[5000]: I0105 22:30:59.910998 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-pgdwz_61ca53f0-4a50-4090-846e-cfe229006c13/cert-manager-webhook/0.log" Jan 05 22:31:10 crc kubenswrapper[5000]: I0105 22:31:10.829232 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-mwb84_51c01670-2f5f-45e5-b50c-10034384df7b/nmstate-console-plugin/0.log" Jan 05 22:31:10 crc kubenswrapper[5000]: I0105 22:31:10.995145 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sgg82_061efcae-2cef-41f7-bae8-69730db02cf2/nmstate-handler/0.log" Jan 05 22:31:11 crc kubenswrapper[5000]: I0105 22:31:11.026095 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-9zbc5_e754b051-d59b-4f7b-9bd4-8ac140b5a8a3/kube-rbac-proxy/0.log" Jan 05 22:31:11 crc kubenswrapper[5000]: I0105 22:31:11.051043 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-9zbc5_e754b051-d59b-4f7b-9bd4-8ac140b5a8a3/nmstate-metrics/0.log" Jan 05 22:31:11 crc kubenswrapper[5000]: I0105 22:31:11.170202 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-r56zg_e2491ff3-21bb-4019-b297-1e6b0bdd9707/nmstate-operator/0.log" Jan 05 22:31:11 crc kubenswrapper[5000]: I0105 22:31:11.230621 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-hf8ck_251a5c5e-01cb-474f-9271-1d8ec430e9ac/nmstate-webhook/0.log" Jan 05 22:31:23 crc kubenswrapper[5000]: I0105 22:31:23.099146 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:31:23 crc kubenswrapper[5000]: I0105 22:31:23.099736 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:31:23 crc kubenswrapper[5000]: I0105 22:31:23.099788 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 22:31:23 crc kubenswrapper[5000]: I0105 22:31:23.100596 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d12dc9705c21cac0e64dbae7543b906333864b72115a69c82d503f1459f34fba"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:31:23 crc kubenswrapper[5000]: I0105 22:31:23.100658 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" 
podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://d12dc9705c21cac0e64dbae7543b906333864b72115a69c82d503f1459f34fba" gracePeriod=600 Jan 05 22:31:23 crc kubenswrapper[5000]: I0105 22:31:23.257424 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="d12dc9705c21cac0e64dbae7543b906333864b72115a69c82d503f1459f34fba" exitCode=0 Jan 05 22:31:23 crc kubenswrapper[5000]: I0105 22:31:23.257468 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"d12dc9705c21cac0e64dbae7543b906333864b72115a69c82d503f1459f34fba"} Jan 05 22:31:23 crc kubenswrapper[5000]: I0105 22:31:23.257499 5000 scope.go:117] "RemoveContainer" containerID="74d0922dd999794ffdc499cabd2794203366df6f5a303ef028633e608e15bfcf" Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.075914 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-fvbvp_768d8155-0383-40d9-993e-fe7a60a3b020/kube-rbac-proxy/0.log" Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.170412 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-fvbvp_768d8155-0383-40d9-993e-fe7a60a3b020/controller/0.log" Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.267826 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72"} Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.382261 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-ql6m5_468e8ed3-60c2-4cf4-8c3e-be1d5e91674f/frr-k8s-webhook-server/0.log" 
Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.400092 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-frr-files/0.log" Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.601135 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-frr-files/0.log" Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.607092 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-reloader/0.log" Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.616935 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-metrics/0.log" Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.622632 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-reloader/0.log" Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.787559 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-frr-files/0.log" Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.794995 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-reloader/0.log" Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.824960 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-metrics/0.log" Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.838314 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-metrics/0.log" Jan 05 22:31:24 crc kubenswrapper[5000]: I0105 22:31:24.996325 5000 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-reloader/0.log" Jan 05 22:31:25 crc kubenswrapper[5000]: I0105 22:31:25.004934 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-metrics/0.log" Jan 05 22:31:25 crc kubenswrapper[5000]: I0105 22:31:25.008003 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/controller/0.log" Jan 05 22:31:25 crc kubenswrapper[5000]: I0105 22:31:25.011001 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-frr-files/0.log" Jan 05 22:31:25 crc kubenswrapper[5000]: I0105 22:31:25.162613 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/frr-metrics/0.log" Jan 05 22:31:25 crc kubenswrapper[5000]: I0105 22:31:25.213343 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/kube-rbac-proxy-frr/0.log" Jan 05 22:31:25 crc kubenswrapper[5000]: I0105 22:31:25.248154 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/kube-rbac-proxy/0.log" Jan 05 22:31:25 crc kubenswrapper[5000]: I0105 22:31:25.396535 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/reloader/0.log" Jan 05 22:31:25 crc kubenswrapper[5000]: I0105 22:31:25.465219 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5786b66bf7-nhsgw_61add664-ba89-4308-a9bc-fedeb78aa01d/manager/0.log" Jan 05 22:31:25 crc kubenswrapper[5000]: I0105 22:31:25.703951 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-749c9dfbcd-wjtpt_edb7d669-1a88-412b-8629-ef80169998dd/webhook-server/0.log" Jan 05 22:31:25 crc kubenswrapper[5000]: I0105 22:31:25.836109 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7cjvw_b49f39fb-cf2e-4bae-aefd-e476b4155444/kube-rbac-proxy/0.log" Jan 05 22:31:28 crc kubenswrapper[5000]: I0105 22:31:28.430017 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7cjvw_b49f39fb-cf2e-4bae-aefd-e476b4155444/speaker/0.log" Jan 05 22:31:28 crc kubenswrapper[5000]: I0105 22:31:28.852271 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/frr/0.log" Jan 05 22:31:39 crc kubenswrapper[5000]: I0105 22:31:39.626655 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/util/0.log" Jan 05 22:31:39 crc kubenswrapper[5000]: I0105 22:31:39.626701 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/util/0.log" Jan 05 22:31:39 crc kubenswrapper[5000]: I0105 22:31:39.627993 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/pull/0.log" Jan 05 22:31:39 crc kubenswrapper[5000]: I0105 22:31:39.741162 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/pull/0.log" Jan 05 22:31:39 crc kubenswrapper[5000]: I0105 22:31:39.892710 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/util/0.log" Jan 05 22:31:39 crc kubenswrapper[5000]: I0105 22:31:39.905583 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/pull/0.log" Jan 05 22:31:39 crc kubenswrapper[5000]: I0105 22:31:39.923775 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/extract/0.log" Jan 05 22:31:40 crc kubenswrapper[5000]: I0105 22:31:40.051571 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/util/0.log" Jan 05 22:31:40 crc kubenswrapper[5000]: I0105 22:31:40.188754 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/util/0.log" Jan 05 22:31:40 crc kubenswrapper[5000]: I0105 22:31:40.206422 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/pull/0.log" Jan 05 22:31:40 crc kubenswrapper[5000]: I0105 22:31:40.234985 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/pull/0.log" Jan 05 22:31:40 crc kubenswrapper[5000]: I0105 22:31:40.398148 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/util/0.log" Jan 05 
22:31:40 crc kubenswrapper[5000]: I0105 22:31:40.423538 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/pull/0.log" Jan 05 22:31:40 crc kubenswrapper[5000]: I0105 22:31:40.426601 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/extract/0.log" Jan 05 22:31:40 crc kubenswrapper[5000]: I0105 22:31:40.549580 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/extract-utilities/0.log" Jan 05 22:31:40 crc kubenswrapper[5000]: I0105 22:31:40.702797 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/extract-content/0.log" Jan 05 22:31:40 crc kubenswrapper[5000]: I0105 22:31:40.726706 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/extract-utilities/0.log" Jan 05 22:31:40 crc kubenswrapper[5000]: I0105 22:31:40.752430 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/extract-content/0.log" Jan 05 22:31:40 crc kubenswrapper[5000]: I0105 22:31:40.892668 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/extract-utilities/0.log" Jan 05 22:31:40 crc kubenswrapper[5000]: I0105 22:31:40.901784 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/extract-content/0.log" Jan 05 22:31:41 crc kubenswrapper[5000]: I0105 22:31:41.163780 
5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/extract-utilities/0.log" Jan 05 22:31:41 crc kubenswrapper[5000]: I0105 22:31:41.346448 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/extract-content/0.log" Jan 05 22:31:41 crc kubenswrapper[5000]: I0105 22:31:41.355861 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/extract-content/0.log" Jan 05 22:31:41 crc kubenswrapper[5000]: I0105 22:31:41.359645 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/registry-server/0.log" Jan 05 22:31:41 crc kubenswrapper[5000]: I0105 22:31:41.365270 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/extract-utilities/0.log" Jan 05 22:31:41 crc kubenswrapper[5000]: I0105 22:31:41.594691 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/extract-content/0.log" Jan 05 22:31:41 crc kubenswrapper[5000]: I0105 22:31:41.608360 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/extract-utilities/0.log" Jan 05 22:31:41 crc kubenswrapper[5000]: I0105 22:31:41.835980 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-d8trn_28f7248c-0908-4c50-8c47-14d96f5c8665/marketplace-operator/0.log" Jan 05 22:31:41 crc kubenswrapper[5000]: I0105 22:31:41.902131 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/extract-utilities/0.log" Jan 05 22:31:42 crc kubenswrapper[5000]: I0105 22:31:42.065273 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/extract-utilities/0.log" Jan 05 22:31:42 crc kubenswrapper[5000]: I0105 22:31:42.106105 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/registry-server/0.log" Jan 05 22:31:42 crc kubenswrapper[5000]: I0105 22:31:42.124248 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/extract-content/0.log" Jan 05 22:31:42 crc kubenswrapper[5000]: I0105 22:31:42.144765 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/extract-content/0.log" Jan 05 22:31:42 crc kubenswrapper[5000]: I0105 22:31:42.363681 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/extract-content/0.log" Jan 05 22:31:42 crc kubenswrapper[5000]: I0105 22:31:42.369721 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/extract-utilities/0.log" Jan 05 22:31:42 crc kubenswrapper[5000]: I0105 22:31:42.424468 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/registry-server/0.log" Jan 05 22:31:42 crc kubenswrapper[5000]: I0105 22:31:42.538295 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/extract-utilities/0.log" Jan 05 22:31:42 crc kubenswrapper[5000]: I0105 22:31:42.706332 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/extract-utilities/0.log" Jan 05 22:31:42 crc kubenswrapper[5000]: I0105 22:31:42.721961 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/extract-content/0.log" Jan 05 22:31:42 crc kubenswrapper[5000]: I0105 22:31:42.747980 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/extract-content/0.log" Jan 05 22:31:42 crc kubenswrapper[5000]: I0105 22:31:42.883881 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/extract-utilities/0.log" Jan 05 22:31:42 crc kubenswrapper[5000]: I0105 22:31:42.941323 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/extract-content/0.log" Jan 05 22:31:43 crc kubenswrapper[5000]: I0105 22:31:43.387690 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/registry-server/0.log" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.566016 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d46tt"] Jan 05 22:32:08 crc kubenswrapper[5000]: E0105 22:32:08.566809 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4682bb44-e0d3-47e8-a7fb-8868e86d5d66" containerName="collect-profiles" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.566820 5000 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4682bb44-e0d3-47e8-a7fb-8868e86d5d66" containerName="collect-profiles" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.566997 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="4682bb44-e0d3-47e8-a7fb-8868e86d5d66" containerName="collect-profiles" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.597995 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.601482 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d46tt"] Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.676015 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6502d619-0e1c-477a-ae1a-fd91ba50ea94-utilities\") pod \"redhat-operators-d46tt\" (UID: \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\") " pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.676427 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm9bv\" (UniqueName: \"kubernetes.io/projected/6502d619-0e1c-477a-ae1a-fd91ba50ea94-kube-api-access-nm9bv\") pod \"redhat-operators-d46tt\" (UID: \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\") " pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.676457 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6502d619-0e1c-477a-ae1a-fd91ba50ea94-catalog-content\") pod \"redhat-operators-d46tt\" (UID: \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\") " pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.779547 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6502d619-0e1c-477a-ae1a-fd91ba50ea94-utilities\") pod \"redhat-operators-d46tt\" (UID: \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\") " pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.779646 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm9bv\" (UniqueName: \"kubernetes.io/projected/6502d619-0e1c-477a-ae1a-fd91ba50ea94-kube-api-access-nm9bv\") pod \"redhat-operators-d46tt\" (UID: \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\") " pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.779673 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6502d619-0e1c-477a-ae1a-fd91ba50ea94-catalog-content\") pod \"redhat-operators-d46tt\" (UID: \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\") " pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.780180 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6502d619-0e1c-477a-ae1a-fd91ba50ea94-utilities\") pod \"redhat-operators-d46tt\" (UID: \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\") " pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.780226 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6502d619-0e1c-477a-ae1a-fd91ba50ea94-catalog-content\") pod \"redhat-operators-d46tt\" (UID: \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\") " pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.800916 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm9bv\" 
(UniqueName: \"kubernetes.io/projected/6502d619-0e1c-477a-ae1a-fd91ba50ea94-kube-api-access-nm9bv\") pod \"redhat-operators-d46tt\" (UID: \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\") " pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:08 crc kubenswrapper[5000]: I0105 22:32:08.929605 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:09 crc kubenswrapper[5000]: I0105 22:32:09.455112 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d46tt"] Jan 05 22:32:10 crc kubenswrapper[5000]: I0105 22:32:10.485261 5000 generic.go:334] "Generic (PLEG): container finished" podID="6502d619-0e1c-477a-ae1a-fd91ba50ea94" containerID="3fbcae2058e7612cc22d7c1cdd86e54e79a7d12bcc0c10753152dd9fb1a80cb6" exitCode=0 Jan 05 22:32:10 crc kubenswrapper[5000]: I0105 22:32:10.485837 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d46tt" event={"ID":"6502d619-0e1c-477a-ae1a-fd91ba50ea94","Type":"ContainerDied","Data":"3fbcae2058e7612cc22d7c1cdd86e54e79a7d12bcc0c10753152dd9fb1a80cb6"} Jan 05 22:32:10 crc kubenswrapper[5000]: I0105 22:32:10.485863 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d46tt" event={"ID":"6502d619-0e1c-477a-ae1a-fd91ba50ea94","Type":"ContainerStarted","Data":"51145a5c78f807f73f713fa7c6032e6b1e7d34f617f50698ca6bb50657ecdf10"} Jan 05 22:32:12 crc kubenswrapper[5000]: I0105 22:32:12.511769 5000 generic.go:334] "Generic (PLEG): container finished" podID="6502d619-0e1c-477a-ae1a-fd91ba50ea94" containerID="f63b79067bad27571e4c6e35000bfa4504f56a49b75d062819f362293487e592" exitCode=0 Jan 05 22:32:12 crc kubenswrapper[5000]: I0105 22:32:12.512309 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d46tt" 
event={"ID":"6502d619-0e1c-477a-ae1a-fd91ba50ea94","Type":"ContainerDied","Data":"f63b79067bad27571e4c6e35000bfa4504f56a49b75d062819f362293487e592"} Jan 05 22:32:13 crc kubenswrapper[5000]: I0105 22:32:13.522132 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d46tt" event={"ID":"6502d619-0e1c-477a-ae1a-fd91ba50ea94","Type":"ContainerStarted","Data":"04b6bbfb01105c3e408f61131855db029693ea8e3fcca8ec9d6a97a2b1c95a2c"} Jan 05 22:32:13 crc kubenswrapper[5000]: I0105 22:32:13.541179 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d46tt" podStartSLOduration=2.856691881 podStartE2EDuration="5.541158516s" podCreationTimestamp="2026-01-05 22:32:08 +0000 UTC" firstStartedPulling="2026-01-05 22:32:10.487729108 +0000 UTC m=+3485.443931577" lastFinishedPulling="2026-01-05 22:32:13.172195743 +0000 UTC m=+3488.128398212" observedRunningTime="2026-01-05 22:32:13.538180462 +0000 UTC m=+3488.494382941" watchObservedRunningTime="2026-01-05 22:32:13.541158516 +0000 UTC m=+3488.497360985" Jan 05 22:32:18 crc kubenswrapper[5000]: I0105 22:32:18.930375 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:18 crc kubenswrapper[5000]: I0105 22:32:18.930957 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:18 crc kubenswrapper[5000]: I0105 22:32:18.980137 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:19 crc kubenswrapper[5000]: I0105 22:32:19.620617 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:19 crc kubenswrapper[5000]: I0105 22:32:19.666861 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-d46tt"] Jan 05 22:32:21 crc kubenswrapper[5000]: I0105 22:32:21.591715 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d46tt" podUID="6502d619-0e1c-477a-ae1a-fd91ba50ea94" containerName="registry-server" containerID="cri-o://04b6bbfb01105c3e408f61131855db029693ea8e3fcca8ec9d6a97a2b1c95a2c" gracePeriod=2 Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.089790 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.155517 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm9bv\" (UniqueName: \"kubernetes.io/projected/6502d619-0e1c-477a-ae1a-fd91ba50ea94-kube-api-access-nm9bv\") pod \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\" (UID: \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\") " Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.155716 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6502d619-0e1c-477a-ae1a-fd91ba50ea94-catalog-content\") pod \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\" (UID: \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\") " Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.155775 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6502d619-0e1c-477a-ae1a-fd91ba50ea94-utilities\") pod \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\" (UID: \"6502d619-0e1c-477a-ae1a-fd91ba50ea94\") " Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.157051 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6502d619-0e1c-477a-ae1a-fd91ba50ea94-utilities" (OuterVolumeSpecName: "utilities") pod "6502d619-0e1c-477a-ae1a-fd91ba50ea94" (UID: 
"6502d619-0e1c-477a-ae1a-fd91ba50ea94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.164503 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6502d619-0e1c-477a-ae1a-fd91ba50ea94-kube-api-access-nm9bv" (OuterVolumeSpecName: "kube-api-access-nm9bv") pod "6502d619-0e1c-477a-ae1a-fd91ba50ea94" (UID: "6502d619-0e1c-477a-ae1a-fd91ba50ea94"). InnerVolumeSpecName "kube-api-access-nm9bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.257601 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6502d619-0e1c-477a-ae1a-fd91ba50ea94-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.257643 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm9bv\" (UniqueName: \"kubernetes.io/projected/6502d619-0e1c-477a-ae1a-fd91ba50ea94-kube-api-access-nm9bv\") on node \"crc\" DevicePath \"\"" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.605634 5000 generic.go:334] "Generic (PLEG): container finished" podID="6502d619-0e1c-477a-ae1a-fd91ba50ea94" containerID="04b6bbfb01105c3e408f61131855db029693ea8e3fcca8ec9d6a97a2b1c95a2c" exitCode=0 Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.605701 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d46tt" event={"ID":"6502d619-0e1c-477a-ae1a-fd91ba50ea94","Type":"ContainerDied","Data":"04b6bbfb01105c3e408f61131855db029693ea8e3fcca8ec9d6a97a2b1c95a2c"} Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.605746 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d46tt" 
event={"ID":"6502d619-0e1c-477a-ae1a-fd91ba50ea94","Type":"ContainerDied","Data":"51145a5c78f807f73f713fa7c6032e6b1e7d34f617f50698ca6bb50657ecdf10"} Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.605795 5000 scope.go:117] "RemoveContainer" containerID="04b6bbfb01105c3e408f61131855db029693ea8e3fcca8ec9d6a97a2b1c95a2c" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.606042 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d46tt" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.631974 5000 scope.go:117] "RemoveContainer" containerID="f63b79067bad27571e4c6e35000bfa4504f56a49b75d062819f362293487e592" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.656117 5000 scope.go:117] "RemoveContainer" containerID="3fbcae2058e7612cc22d7c1cdd86e54e79a7d12bcc0c10753152dd9fb1a80cb6" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.705995 5000 scope.go:117] "RemoveContainer" containerID="04b6bbfb01105c3e408f61131855db029693ea8e3fcca8ec9d6a97a2b1c95a2c" Jan 05 22:32:22 crc kubenswrapper[5000]: E0105 22:32:22.706464 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b6bbfb01105c3e408f61131855db029693ea8e3fcca8ec9d6a97a2b1c95a2c\": container with ID starting with 04b6bbfb01105c3e408f61131855db029693ea8e3fcca8ec9d6a97a2b1c95a2c not found: ID does not exist" containerID="04b6bbfb01105c3e408f61131855db029693ea8e3fcca8ec9d6a97a2b1c95a2c" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.706516 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b6bbfb01105c3e408f61131855db029693ea8e3fcca8ec9d6a97a2b1c95a2c"} err="failed to get container status \"04b6bbfb01105c3e408f61131855db029693ea8e3fcca8ec9d6a97a2b1c95a2c\": rpc error: code = NotFound desc = could not find container \"04b6bbfb01105c3e408f61131855db029693ea8e3fcca8ec9d6a97a2b1c95a2c\": 
container with ID starting with 04b6bbfb01105c3e408f61131855db029693ea8e3fcca8ec9d6a97a2b1c95a2c not found: ID does not exist" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.706541 5000 scope.go:117] "RemoveContainer" containerID="f63b79067bad27571e4c6e35000bfa4504f56a49b75d062819f362293487e592" Jan 05 22:32:22 crc kubenswrapper[5000]: E0105 22:32:22.706824 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f63b79067bad27571e4c6e35000bfa4504f56a49b75d062819f362293487e592\": container with ID starting with f63b79067bad27571e4c6e35000bfa4504f56a49b75d062819f362293487e592 not found: ID does not exist" containerID="f63b79067bad27571e4c6e35000bfa4504f56a49b75d062819f362293487e592" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.706871 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63b79067bad27571e4c6e35000bfa4504f56a49b75d062819f362293487e592"} err="failed to get container status \"f63b79067bad27571e4c6e35000bfa4504f56a49b75d062819f362293487e592\": rpc error: code = NotFound desc = could not find container \"f63b79067bad27571e4c6e35000bfa4504f56a49b75d062819f362293487e592\": container with ID starting with f63b79067bad27571e4c6e35000bfa4504f56a49b75d062819f362293487e592 not found: ID does not exist" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.706909 5000 scope.go:117] "RemoveContainer" containerID="3fbcae2058e7612cc22d7c1cdd86e54e79a7d12bcc0c10753152dd9fb1a80cb6" Jan 05 22:32:22 crc kubenswrapper[5000]: E0105 22:32:22.707189 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbcae2058e7612cc22d7c1cdd86e54e79a7d12bcc0c10753152dd9fb1a80cb6\": container with ID starting with 3fbcae2058e7612cc22d7c1cdd86e54e79a7d12bcc0c10753152dd9fb1a80cb6 not found: ID does not exist" 
containerID="3fbcae2058e7612cc22d7c1cdd86e54e79a7d12bcc0c10753152dd9fb1a80cb6" Jan 05 22:32:22 crc kubenswrapper[5000]: I0105 22:32:22.707221 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbcae2058e7612cc22d7c1cdd86e54e79a7d12bcc0c10753152dd9fb1a80cb6"} err="failed to get container status \"3fbcae2058e7612cc22d7c1cdd86e54e79a7d12bcc0c10753152dd9fb1a80cb6\": rpc error: code = NotFound desc = could not find container \"3fbcae2058e7612cc22d7c1cdd86e54e79a7d12bcc0c10753152dd9fb1a80cb6\": container with ID starting with 3fbcae2058e7612cc22d7c1cdd86e54e79a7d12bcc0c10753152dd9fb1a80cb6 not found: ID does not exist" Jan 05 22:32:23 crc kubenswrapper[5000]: I0105 22:32:23.076543 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6502d619-0e1c-477a-ae1a-fd91ba50ea94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6502d619-0e1c-477a-ae1a-fd91ba50ea94" (UID: "6502d619-0e1c-477a-ae1a-fd91ba50ea94"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:32:23 crc kubenswrapper[5000]: I0105 22:32:23.176568 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6502d619-0e1c-477a-ae1a-fd91ba50ea94-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:32:23 crc kubenswrapper[5000]: I0105 22:32:23.249301 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d46tt"] Jan 05 22:32:23 crc kubenswrapper[5000]: I0105 22:32:23.259840 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d46tt"] Jan 05 22:32:23 crc kubenswrapper[5000]: I0105 22:32:23.337394 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6502d619-0e1c-477a-ae1a-fd91ba50ea94" path="/var/lib/kubelet/pods/6502d619-0e1c-477a-ae1a-fd91ba50ea94/volumes" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.756441 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wm4f8"] Jan 05 22:32:29 crc kubenswrapper[5000]: E0105 22:32:29.759166 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6502d619-0e1c-477a-ae1a-fd91ba50ea94" containerName="extract-content" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.759195 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="6502d619-0e1c-477a-ae1a-fd91ba50ea94" containerName="extract-content" Jan 05 22:32:29 crc kubenswrapper[5000]: E0105 22:32:29.759238 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6502d619-0e1c-477a-ae1a-fd91ba50ea94" containerName="registry-server" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.759245 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="6502d619-0e1c-477a-ae1a-fd91ba50ea94" containerName="registry-server" Jan 05 22:32:29 crc kubenswrapper[5000]: E0105 22:32:29.759259 5000 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6502d619-0e1c-477a-ae1a-fd91ba50ea94" containerName="extract-utilities" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.759267 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="6502d619-0e1c-477a-ae1a-fd91ba50ea94" containerName="extract-utilities" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.759489 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="6502d619-0e1c-477a-ae1a-fd91ba50ea94" containerName="registry-server" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.761162 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.774703 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm4f8"] Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.803672 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81dc269b-1261-42b8-95ed-4bab50c12cda-catalog-content\") pod \"redhat-marketplace-wm4f8\" (UID: \"81dc269b-1261-42b8-95ed-4bab50c12cda\") " pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.803792 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj5vl\" (UniqueName: \"kubernetes.io/projected/81dc269b-1261-42b8-95ed-4bab50c12cda-kube-api-access-gj5vl\") pod \"redhat-marketplace-wm4f8\" (UID: \"81dc269b-1261-42b8-95ed-4bab50c12cda\") " pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.803881 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81dc269b-1261-42b8-95ed-4bab50c12cda-utilities\") pod \"redhat-marketplace-wm4f8\" (UID: 
\"81dc269b-1261-42b8-95ed-4bab50c12cda\") " pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.906842 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81dc269b-1261-42b8-95ed-4bab50c12cda-utilities\") pod \"redhat-marketplace-wm4f8\" (UID: \"81dc269b-1261-42b8-95ed-4bab50c12cda\") " pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.907364 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81dc269b-1261-42b8-95ed-4bab50c12cda-catalog-content\") pod \"redhat-marketplace-wm4f8\" (UID: \"81dc269b-1261-42b8-95ed-4bab50c12cda\") " pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.907493 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj5vl\" (UniqueName: \"kubernetes.io/projected/81dc269b-1261-42b8-95ed-4bab50c12cda-kube-api-access-gj5vl\") pod \"redhat-marketplace-wm4f8\" (UID: \"81dc269b-1261-42b8-95ed-4bab50c12cda\") " pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.909324 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81dc269b-1261-42b8-95ed-4bab50c12cda-utilities\") pod \"redhat-marketplace-wm4f8\" (UID: \"81dc269b-1261-42b8-95ed-4bab50c12cda\") " pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.909753 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81dc269b-1261-42b8-95ed-4bab50c12cda-catalog-content\") pod \"redhat-marketplace-wm4f8\" (UID: \"81dc269b-1261-42b8-95ed-4bab50c12cda\") " 
pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:29 crc kubenswrapper[5000]: I0105 22:32:29.928939 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj5vl\" (UniqueName: \"kubernetes.io/projected/81dc269b-1261-42b8-95ed-4bab50c12cda-kube-api-access-gj5vl\") pod \"redhat-marketplace-wm4f8\" (UID: \"81dc269b-1261-42b8-95ed-4bab50c12cda\") " pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:30 crc kubenswrapper[5000]: I0105 22:32:30.081408 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:30 crc kubenswrapper[5000]: I0105 22:32:30.554417 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm4f8"] Jan 05 22:32:30 crc kubenswrapper[5000]: I0105 22:32:30.673715 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm4f8" event={"ID":"81dc269b-1261-42b8-95ed-4bab50c12cda","Type":"ContainerStarted","Data":"3206f417b2816acda52dd3531f0e2c7a8c6583dad2efb2c70e1c5a46d51cb44c"} Jan 05 22:32:31 crc kubenswrapper[5000]: I0105 22:32:31.683707 5000 generic.go:334] "Generic (PLEG): container finished" podID="81dc269b-1261-42b8-95ed-4bab50c12cda" containerID="08edd65b6e38411d8a26b1a39018ef3ae7cf18124ab46a53686d3c0acf6627b3" exitCode=0 Jan 05 22:32:31 crc kubenswrapper[5000]: I0105 22:32:31.684050 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm4f8" event={"ID":"81dc269b-1261-42b8-95ed-4bab50c12cda","Type":"ContainerDied","Data":"08edd65b6e38411d8a26b1a39018ef3ae7cf18124ab46a53686d3c0acf6627b3"} Jan 05 22:32:32 crc kubenswrapper[5000]: I0105 22:32:32.696517 5000 generic.go:334] "Generic (PLEG): container finished" podID="81dc269b-1261-42b8-95ed-4bab50c12cda" containerID="68161992b6852f376619eb8ad96cbf634c6dd5d9766aacc65a2a8c20ff93caa9" exitCode=0 Jan 05 22:32:32 crc 
kubenswrapper[5000]: I0105 22:32:32.697209 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm4f8" event={"ID":"81dc269b-1261-42b8-95ed-4bab50c12cda","Type":"ContainerDied","Data":"68161992b6852f376619eb8ad96cbf634c6dd5d9766aacc65a2a8c20ff93caa9"} Jan 05 22:32:33 crc kubenswrapper[5000]: I0105 22:32:33.709790 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm4f8" event={"ID":"81dc269b-1261-42b8-95ed-4bab50c12cda","Type":"ContainerStarted","Data":"ae4b9110528d08543bc1a8bb9bba0f07deffdda571611166a71361eb264c8a30"} Jan 05 22:32:33 crc kubenswrapper[5000]: I0105 22:32:33.732475 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wm4f8" podStartSLOduration=3.286581286 podStartE2EDuration="4.732457427s" podCreationTimestamp="2026-01-05 22:32:29 +0000 UTC" firstStartedPulling="2026-01-05 22:32:31.68560039 +0000 UTC m=+3506.641802859" lastFinishedPulling="2026-01-05 22:32:33.131476531 +0000 UTC m=+3508.087679000" observedRunningTime="2026-01-05 22:32:33.730603404 +0000 UTC m=+3508.686805893" watchObservedRunningTime="2026-01-05 22:32:33.732457427 +0000 UTC m=+3508.688659896" Jan 05 22:32:40 crc kubenswrapper[5000]: I0105 22:32:40.081453 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:40 crc kubenswrapper[5000]: I0105 22:32:40.082838 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:40 crc kubenswrapper[5000]: I0105 22:32:40.134039 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:40 crc kubenswrapper[5000]: I0105 22:32:40.835615 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:40 crc kubenswrapper[5000]: I0105 22:32:40.890042 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm4f8"] Jan 05 22:32:42 crc kubenswrapper[5000]: I0105 22:32:42.787466 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wm4f8" podUID="81dc269b-1261-42b8-95ed-4bab50c12cda" containerName="registry-server" containerID="cri-o://ae4b9110528d08543bc1a8bb9bba0f07deffdda571611166a71361eb264c8a30" gracePeriod=2 Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.760629 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.812827 5000 generic.go:334] "Generic (PLEG): container finished" podID="81dc269b-1261-42b8-95ed-4bab50c12cda" containerID="ae4b9110528d08543bc1a8bb9bba0f07deffdda571611166a71361eb264c8a30" exitCode=0 Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.812863 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm4f8" event={"ID":"81dc269b-1261-42b8-95ed-4bab50c12cda","Type":"ContainerDied","Data":"ae4b9110528d08543bc1a8bb9bba0f07deffdda571611166a71361eb264c8a30"} Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.812904 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wm4f8" event={"ID":"81dc269b-1261-42b8-95ed-4bab50c12cda","Type":"ContainerDied","Data":"3206f417b2816acda52dd3531f0e2c7a8c6583dad2efb2c70e1c5a46d51cb44c"} Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.812923 5000 scope.go:117] "RemoveContainer" containerID="ae4b9110528d08543bc1a8bb9bba0f07deffdda571611166a71361eb264c8a30" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.812976 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wm4f8" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.830229 5000 scope.go:117] "RemoveContainer" containerID="68161992b6852f376619eb8ad96cbf634c6dd5d9766aacc65a2a8c20ff93caa9" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.850566 5000 scope.go:117] "RemoveContainer" containerID="08edd65b6e38411d8a26b1a39018ef3ae7cf18124ab46a53686d3c0acf6627b3" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.871915 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj5vl\" (UniqueName: \"kubernetes.io/projected/81dc269b-1261-42b8-95ed-4bab50c12cda-kube-api-access-gj5vl\") pod \"81dc269b-1261-42b8-95ed-4bab50c12cda\" (UID: \"81dc269b-1261-42b8-95ed-4bab50c12cda\") " Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.872019 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81dc269b-1261-42b8-95ed-4bab50c12cda-utilities\") pod \"81dc269b-1261-42b8-95ed-4bab50c12cda\" (UID: \"81dc269b-1261-42b8-95ed-4bab50c12cda\") " Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.872162 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81dc269b-1261-42b8-95ed-4bab50c12cda-catalog-content\") pod \"81dc269b-1261-42b8-95ed-4bab50c12cda\" (UID: \"81dc269b-1261-42b8-95ed-4bab50c12cda\") " Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.873353 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81dc269b-1261-42b8-95ed-4bab50c12cda-utilities" (OuterVolumeSpecName: "utilities") pod "81dc269b-1261-42b8-95ed-4bab50c12cda" (UID: "81dc269b-1261-42b8-95ed-4bab50c12cda"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.877546 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81dc269b-1261-42b8-95ed-4bab50c12cda-kube-api-access-gj5vl" (OuterVolumeSpecName: "kube-api-access-gj5vl") pod "81dc269b-1261-42b8-95ed-4bab50c12cda" (UID: "81dc269b-1261-42b8-95ed-4bab50c12cda"). InnerVolumeSpecName "kube-api-access-gj5vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.896294 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81dc269b-1261-42b8-95ed-4bab50c12cda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81dc269b-1261-42b8-95ed-4bab50c12cda" (UID: "81dc269b-1261-42b8-95ed-4bab50c12cda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.937332 5000 scope.go:117] "RemoveContainer" containerID="ae4b9110528d08543bc1a8bb9bba0f07deffdda571611166a71361eb264c8a30" Jan 05 22:32:43 crc kubenswrapper[5000]: E0105 22:32:43.937869 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae4b9110528d08543bc1a8bb9bba0f07deffdda571611166a71361eb264c8a30\": container with ID starting with ae4b9110528d08543bc1a8bb9bba0f07deffdda571611166a71361eb264c8a30 not found: ID does not exist" containerID="ae4b9110528d08543bc1a8bb9bba0f07deffdda571611166a71361eb264c8a30" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.937936 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae4b9110528d08543bc1a8bb9bba0f07deffdda571611166a71361eb264c8a30"} err="failed to get container status \"ae4b9110528d08543bc1a8bb9bba0f07deffdda571611166a71361eb264c8a30\": rpc error: code = NotFound desc = could not find 
container \"ae4b9110528d08543bc1a8bb9bba0f07deffdda571611166a71361eb264c8a30\": container with ID starting with ae4b9110528d08543bc1a8bb9bba0f07deffdda571611166a71361eb264c8a30 not found: ID does not exist" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.937962 5000 scope.go:117] "RemoveContainer" containerID="68161992b6852f376619eb8ad96cbf634c6dd5d9766aacc65a2a8c20ff93caa9" Jan 05 22:32:43 crc kubenswrapper[5000]: E0105 22:32:43.938342 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68161992b6852f376619eb8ad96cbf634c6dd5d9766aacc65a2a8c20ff93caa9\": container with ID starting with 68161992b6852f376619eb8ad96cbf634c6dd5d9766aacc65a2a8c20ff93caa9 not found: ID does not exist" containerID="68161992b6852f376619eb8ad96cbf634c6dd5d9766aacc65a2a8c20ff93caa9" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.938365 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68161992b6852f376619eb8ad96cbf634c6dd5d9766aacc65a2a8c20ff93caa9"} err="failed to get container status \"68161992b6852f376619eb8ad96cbf634c6dd5d9766aacc65a2a8c20ff93caa9\": rpc error: code = NotFound desc = could not find container \"68161992b6852f376619eb8ad96cbf634c6dd5d9766aacc65a2a8c20ff93caa9\": container with ID starting with 68161992b6852f376619eb8ad96cbf634c6dd5d9766aacc65a2a8c20ff93caa9 not found: ID does not exist" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.938377 5000 scope.go:117] "RemoveContainer" containerID="08edd65b6e38411d8a26b1a39018ef3ae7cf18124ab46a53686d3c0acf6627b3" Jan 05 22:32:43 crc kubenswrapper[5000]: E0105 22:32:43.938813 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08edd65b6e38411d8a26b1a39018ef3ae7cf18124ab46a53686d3c0acf6627b3\": container with ID starting with 08edd65b6e38411d8a26b1a39018ef3ae7cf18124ab46a53686d3c0acf6627b3 not found: ID does 
not exist" containerID="08edd65b6e38411d8a26b1a39018ef3ae7cf18124ab46a53686d3c0acf6627b3" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.938854 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08edd65b6e38411d8a26b1a39018ef3ae7cf18124ab46a53686d3c0acf6627b3"} err="failed to get container status \"08edd65b6e38411d8a26b1a39018ef3ae7cf18124ab46a53686d3c0acf6627b3\": rpc error: code = NotFound desc = could not find container \"08edd65b6e38411d8a26b1a39018ef3ae7cf18124ab46a53686d3c0acf6627b3\": container with ID starting with 08edd65b6e38411d8a26b1a39018ef3ae7cf18124ab46a53686d3c0acf6627b3 not found: ID does not exist" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.975929 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj5vl\" (UniqueName: \"kubernetes.io/projected/81dc269b-1261-42b8-95ed-4bab50c12cda-kube-api-access-gj5vl\") on node \"crc\" DevicePath \"\"" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.975965 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81dc269b-1261-42b8-95ed-4bab50c12cda-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:32:43 crc kubenswrapper[5000]: I0105 22:32:43.975974 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81dc269b-1261-42b8-95ed-4bab50c12cda-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:32:44 crc kubenswrapper[5000]: I0105 22:32:44.149951 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm4f8"] Jan 05 22:32:44 crc kubenswrapper[5000]: I0105 22:32:44.156354 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wm4f8"] Jan 05 22:32:45 crc kubenswrapper[5000]: I0105 22:32:45.334669 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="81dc269b-1261-42b8-95ed-4bab50c12cda" path="/var/lib/kubelet/pods/81dc269b-1261-42b8-95ed-4bab50c12cda/volumes" Jan 05 22:33:22 crc kubenswrapper[5000]: I0105 22:33:22.174772 5000 generic.go:334] "Generic (PLEG): container finished" podID="7b84943b-bd96-47dc-94cc-b5e19a994d33" containerID="751ec60a037c77cfd45c2a0d134388a4d8ac4083b5e1c72b46bcf48cf140740b" exitCode=0 Jan 05 22:33:22 crc kubenswrapper[5000]: I0105 22:33:22.174938 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cj244/must-gather-wwfz5" event={"ID":"7b84943b-bd96-47dc-94cc-b5e19a994d33","Type":"ContainerDied","Data":"751ec60a037c77cfd45c2a0d134388a4d8ac4083b5e1c72b46bcf48cf140740b"} Jan 05 22:33:22 crc kubenswrapper[5000]: I0105 22:33:22.175811 5000 scope.go:117] "RemoveContainer" containerID="751ec60a037c77cfd45c2a0d134388a4d8ac4083b5e1c72b46bcf48cf140740b" Jan 05 22:33:22 crc kubenswrapper[5000]: I0105 22:33:22.409220 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cj244_must-gather-wwfz5_7b84943b-bd96-47dc-94cc-b5e19a994d33/gather/0.log" Jan 05 22:33:23 crc kubenswrapper[5000]: I0105 22:33:23.098726 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:33:23 crc kubenswrapper[5000]: I0105 22:33:23.098794 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:33:30 crc kubenswrapper[5000]: I0105 22:33:30.226236 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-cj244/must-gather-wwfz5"] Jan 05 22:33:30 crc kubenswrapper[5000]: I0105 22:33:30.227170 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cj244/must-gather-wwfz5" podUID="7b84943b-bd96-47dc-94cc-b5e19a994d33" containerName="copy" containerID="cri-o://c8b3d3819b823a5ead507b67d9c9ac3aee33fc507818f4bfde9835b2797e0dd1" gracePeriod=2 Jan 05 22:33:30 crc kubenswrapper[5000]: I0105 22:33:30.235608 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cj244/must-gather-wwfz5"] Jan 05 22:33:30 crc kubenswrapper[5000]: I0105 22:33:30.741452 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cj244_must-gather-wwfz5_7b84943b-bd96-47dc-94cc-b5e19a994d33/copy/0.log" Jan 05 22:33:30 crc kubenswrapper[5000]: I0105 22:33:30.742202 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cj244/must-gather-wwfz5" Jan 05 22:33:30 crc kubenswrapper[5000]: I0105 22:33:30.931637 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfkfz\" (UniqueName: \"kubernetes.io/projected/7b84943b-bd96-47dc-94cc-b5e19a994d33-kube-api-access-hfkfz\") pod \"7b84943b-bd96-47dc-94cc-b5e19a994d33\" (UID: \"7b84943b-bd96-47dc-94cc-b5e19a994d33\") " Jan 05 22:33:30 crc kubenswrapper[5000]: I0105 22:33:30.931728 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7b84943b-bd96-47dc-94cc-b5e19a994d33-must-gather-output\") pod \"7b84943b-bd96-47dc-94cc-b5e19a994d33\" (UID: \"7b84943b-bd96-47dc-94cc-b5e19a994d33\") " Jan 05 22:33:30 crc kubenswrapper[5000]: I0105 22:33:30.937955 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b84943b-bd96-47dc-94cc-b5e19a994d33-kube-api-access-hfkfz" (OuterVolumeSpecName: 
"kube-api-access-hfkfz") pod "7b84943b-bd96-47dc-94cc-b5e19a994d33" (UID: "7b84943b-bd96-47dc-94cc-b5e19a994d33"). InnerVolumeSpecName "kube-api-access-hfkfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:33:31 crc kubenswrapper[5000]: I0105 22:33:31.033988 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfkfz\" (UniqueName: \"kubernetes.io/projected/7b84943b-bd96-47dc-94cc-b5e19a994d33-kube-api-access-hfkfz\") on node \"crc\" DevicePath \"\"" Jan 05 22:33:31 crc kubenswrapper[5000]: I0105 22:33:31.069787 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b84943b-bd96-47dc-94cc-b5e19a994d33-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7b84943b-bd96-47dc-94cc-b5e19a994d33" (UID: "7b84943b-bd96-47dc-94cc-b5e19a994d33"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:33:31 crc kubenswrapper[5000]: I0105 22:33:31.138209 5000 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7b84943b-bd96-47dc-94cc-b5e19a994d33-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 05 22:33:31 crc kubenswrapper[5000]: I0105 22:33:31.268556 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cj244_must-gather-wwfz5_7b84943b-bd96-47dc-94cc-b5e19a994d33/copy/0.log" Jan 05 22:33:31 crc kubenswrapper[5000]: I0105 22:33:31.268973 5000 generic.go:334] "Generic (PLEG): container finished" podID="7b84943b-bd96-47dc-94cc-b5e19a994d33" containerID="c8b3d3819b823a5ead507b67d9c9ac3aee33fc507818f4bfde9835b2797e0dd1" exitCode=143 Jan 05 22:33:31 crc kubenswrapper[5000]: I0105 22:33:31.269021 5000 scope.go:117] "RemoveContainer" containerID="c8b3d3819b823a5ead507b67d9c9ac3aee33fc507818f4bfde9835b2797e0dd1" Jan 05 22:33:31 crc kubenswrapper[5000]: I0105 22:33:31.269138 5000 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-must-gather-cj244/must-gather-wwfz5" Jan 05 22:33:31 crc kubenswrapper[5000]: I0105 22:33:31.299327 5000 scope.go:117] "RemoveContainer" containerID="751ec60a037c77cfd45c2a0d134388a4d8ac4083b5e1c72b46bcf48cf140740b" Jan 05 22:33:31 crc kubenswrapper[5000]: I0105 22:33:31.336382 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b84943b-bd96-47dc-94cc-b5e19a994d33" path="/var/lib/kubelet/pods/7b84943b-bd96-47dc-94cc-b5e19a994d33/volumes" Jan 05 22:33:31 crc kubenswrapper[5000]: I0105 22:33:31.434162 5000 scope.go:117] "RemoveContainer" containerID="c8b3d3819b823a5ead507b67d9c9ac3aee33fc507818f4bfde9835b2797e0dd1" Jan 05 22:33:31 crc kubenswrapper[5000]: E0105 22:33:31.434693 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8b3d3819b823a5ead507b67d9c9ac3aee33fc507818f4bfde9835b2797e0dd1\": container with ID starting with c8b3d3819b823a5ead507b67d9c9ac3aee33fc507818f4bfde9835b2797e0dd1 not found: ID does not exist" containerID="c8b3d3819b823a5ead507b67d9c9ac3aee33fc507818f4bfde9835b2797e0dd1" Jan 05 22:33:31 crc kubenswrapper[5000]: I0105 22:33:31.434756 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8b3d3819b823a5ead507b67d9c9ac3aee33fc507818f4bfde9835b2797e0dd1"} err="failed to get container status \"c8b3d3819b823a5ead507b67d9c9ac3aee33fc507818f4bfde9835b2797e0dd1\": rpc error: code = NotFound desc = could not find container \"c8b3d3819b823a5ead507b67d9c9ac3aee33fc507818f4bfde9835b2797e0dd1\": container with ID starting with c8b3d3819b823a5ead507b67d9c9ac3aee33fc507818f4bfde9835b2797e0dd1 not found: ID does not exist" Jan 05 22:33:31 crc kubenswrapper[5000]: I0105 22:33:31.434801 5000 scope.go:117] "RemoveContainer" containerID="751ec60a037c77cfd45c2a0d134388a4d8ac4083b5e1c72b46bcf48cf140740b" Jan 05 22:33:31 crc kubenswrapper[5000]: E0105 22:33:31.435379 5000 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"751ec60a037c77cfd45c2a0d134388a4d8ac4083b5e1c72b46bcf48cf140740b\": container with ID starting with 751ec60a037c77cfd45c2a0d134388a4d8ac4083b5e1c72b46bcf48cf140740b not found: ID does not exist" containerID="751ec60a037c77cfd45c2a0d134388a4d8ac4083b5e1c72b46bcf48cf140740b" Jan 05 22:33:31 crc kubenswrapper[5000]: I0105 22:33:31.435432 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"751ec60a037c77cfd45c2a0d134388a4d8ac4083b5e1c72b46bcf48cf140740b"} err="failed to get container status \"751ec60a037c77cfd45c2a0d134388a4d8ac4083b5e1c72b46bcf48cf140740b\": rpc error: code = NotFound desc = could not find container \"751ec60a037c77cfd45c2a0d134388a4d8ac4083b5e1c72b46bcf48cf140740b\": container with ID starting with 751ec60a037c77cfd45c2a0d134388a4d8ac4083b5e1c72b46bcf48cf140740b not found: ID does not exist" Jan 05 22:33:53 crc kubenswrapper[5000]: I0105 22:33:53.099322 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:33:53 crc kubenswrapper[5000]: I0105 22:33:53.099842 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:34:23 crc kubenswrapper[5000]: I0105 22:34:23.099363 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:34:23 crc kubenswrapper[5000]: I0105 22:34:23.099973 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:34:23 crc kubenswrapper[5000]: I0105 22:34:23.100027 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 22:34:23 crc kubenswrapper[5000]: I0105 22:34:23.100965 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:34:23 crc kubenswrapper[5000]: I0105 22:34:23.101037 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" gracePeriod=600 Jan 05 22:34:23 crc kubenswrapper[5000]: E0105 22:34:23.226202 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" 
podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:34:23 crc kubenswrapper[5000]: I0105 22:34:23.726051 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" exitCode=0 Jan 05 22:34:23 crc kubenswrapper[5000]: I0105 22:34:23.726090 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72"} Jan 05 22:34:23 crc kubenswrapper[5000]: I0105 22:34:23.726121 5000 scope.go:117] "RemoveContainer" containerID="d12dc9705c21cac0e64dbae7543b906333864b72115a69c82d503f1459f34fba" Jan 05 22:34:23 crc kubenswrapper[5000]: I0105 22:34:23.726629 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:34:23 crc kubenswrapper[5000]: E0105 22:34:23.726853 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:34:35 crc kubenswrapper[5000]: I0105 22:34:35.332143 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:34:35 crc kubenswrapper[5000]: E0105 22:34:35.332923 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:34:49 crc kubenswrapper[5000]: I0105 22:34:49.324275 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:34:49 crc kubenswrapper[5000]: E0105 22:34:49.325015 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:35:00 crc kubenswrapper[5000]: I0105 22:35:00.324617 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:35:00 crc kubenswrapper[5000]: E0105 22:35:00.325494 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.376088 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l86nq"] Jan 05 22:35:10 crc kubenswrapper[5000]: E0105 22:35:10.377028 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b84943b-bd96-47dc-94cc-b5e19a994d33" containerName="gather" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.377041 5000 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7b84943b-bd96-47dc-94cc-b5e19a994d33" containerName="gather" Jan 05 22:35:10 crc kubenswrapper[5000]: E0105 22:35:10.377070 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81dc269b-1261-42b8-95ed-4bab50c12cda" containerName="extract-utilities" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.377076 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="81dc269b-1261-42b8-95ed-4bab50c12cda" containerName="extract-utilities" Jan 05 22:35:10 crc kubenswrapper[5000]: E0105 22:35:10.377084 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b84943b-bd96-47dc-94cc-b5e19a994d33" containerName="copy" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.377091 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b84943b-bd96-47dc-94cc-b5e19a994d33" containerName="copy" Jan 05 22:35:10 crc kubenswrapper[5000]: E0105 22:35:10.377102 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81dc269b-1261-42b8-95ed-4bab50c12cda" containerName="extract-content" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.377107 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="81dc269b-1261-42b8-95ed-4bab50c12cda" containerName="extract-content" Jan 05 22:35:10 crc kubenswrapper[5000]: E0105 22:35:10.377120 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81dc269b-1261-42b8-95ed-4bab50c12cda" containerName="registry-server" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.377126 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="81dc269b-1261-42b8-95ed-4bab50c12cda" containerName="registry-server" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.377308 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b84943b-bd96-47dc-94cc-b5e19a994d33" containerName="gather" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.377325 5000 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="7b84943b-bd96-47dc-94cc-b5e19a994d33" containerName="copy" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.377342 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="81dc269b-1261-42b8-95ed-4bab50c12cda" containerName="registry-server" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.384412 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.392531 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l86nq"] Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.508699 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b09b7c4-9cc6-4ed9-999b-e7544faec215-catalog-content\") pod \"certified-operators-l86nq\" (UID: \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\") " pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.508771 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b09b7c4-9cc6-4ed9-999b-e7544faec215-utilities\") pod \"certified-operators-l86nq\" (UID: \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\") " pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.508986 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnkbg\" (UniqueName: \"kubernetes.io/projected/0b09b7c4-9cc6-4ed9-999b-e7544faec215-kube-api-access-dnkbg\") pod \"certified-operators-l86nq\" (UID: \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\") " pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.610882 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dnkbg\" (UniqueName: \"kubernetes.io/projected/0b09b7c4-9cc6-4ed9-999b-e7544faec215-kube-api-access-dnkbg\") pod \"certified-operators-l86nq\" (UID: \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\") " pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.610986 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b09b7c4-9cc6-4ed9-999b-e7544faec215-catalog-content\") pod \"certified-operators-l86nq\" (UID: \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\") " pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.611037 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b09b7c4-9cc6-4ed9-999b-e7544faec215-utilities\") pod \"certified-operators-l86nq\" (UID: \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\") " pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.611727 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b09b7c4-9cc6-4ed9-999b-e7544faec215-utilities\") pod \"certified-operators-l86nq\" (UID: \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\") " pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.611751 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b09b7c4-9cc6-4ed9-999b-e7544faec215-catalog-content\") pod \"certified-operators-l86nq\" (UID: \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\") " pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.628262 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dnkbg\" (UniqueName: \"kubernetes.io/projected/0b09b7c4-9cc6-4ed9-999b-e7544faec215-kube-api-access-dnkbg\") pod \"certified-operators-l86nq\" (UID: \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\") " pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:10 crc kubenswrapper[5000]: I0105 22:35:10.705001 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:11 crc kubenswrapper[5000]: I0105 22:35:11.222704 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l86nq"] Jan 05 22:35:12 crc kubenswrapper[5000]: I0105 22:35:12.146142 5000 generic.go:334] "Generic (PLEG): container finished" podID="0b09b7c4-9cc6-4ed9-999b-e7544faec215" containerID="32f8baaa37626222616b464f6e5d055f0e6e66899428b416b796f0537584eb86" exitCode=0 Jan 05 22:35:12 crc kubenswrapper[5000]: I0105 22:35:12.146216 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l86nq" event={"ID":"0b09b7c4-9cc6-4ed9-999b-e7544faec215","Type":"ContainerDied","Data":"32f8baaa37626222616b464f6e5d055f0e6e66899428b416b796f0537584eb86"} Jan 05 22:35:12 crc kubenswrapper[5000]: I0105 22:35:12.146522 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l86nq" event={"ID":"0b09b7c4-9cc6-4ed9-999b-e7544faec215","Type":"ContainerStarted","Data":"6ba537e9c83162d8d274242e1d5872763752e2b7086f3a64fc830e7711cd3063"} Jan 05 22:35:12 crc kubenswrapper[5000]: I0105 22:35:12.149826 5000 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:35:12 crc kubenswrapper[5000]: I0105 22:35:12.324410 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:35:12 crc kubenswrapper[5000]: E0105 22:35:12.325066 5000 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:35:14 crc kubenswrapper[5000]: I0105 22:35:14.163932 5000 generic.go:334] "Generic (PLEG): container finished" podID="0b09b7c4-9cc6-4ed9-999b-e7544faec215" containerID="4b3476009447d6cd86239c28cfacf6b69f175ee2f138dca19abdfd9fc8e1d732" exitCode=0 Jan 05 22:35:14 crc kubenswrapper[5000]: I0105 22:35:14.164003 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l86nq" event={"ID":"0b09b7c4-9cc6-4ed9-999b-e7544faec215","Type":"ContainerDied","Data":"4b3476009447d6cd86239c28cfacf6b69f175ee2f138dca19abdfd9fc8e1d732"} Jan 05 22:35:15 crc kubenswrapper[5000]: I0105 22:35:15.175820 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l86nq" event={"ID":"0b09b7c4-9cc6-4ed9-999b-e7544faec215","Type":"ContainerStarted","Data":"ef637645b5e1af2ff1fb85460152d2b44a1fb4424c3e8a7b0e5cb01cf504ee2f"} Jan 05 22:35:15 crc kubenswrapper[5000]: I0105 22:35:15.199075 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l86nq" podStartSLOduration=2.717182014 podStartE2EDuration="5.199057851s" podCreationTimestamp="2026-01-05 22:35:10 +0000 UTC" firstStartedPulling="2026-01-05 22:35:12.149605267 +0000 UTC m=+3667.105807736" lastFinishedPulling="2026-01-05 22:35:14.631481104 +0000 UTC m=+3669.587683573" observedRunningTime="2026-01-05 22:35:15.191553427 +0000 UTC m=+3670.147755896" watchObservedRunningTime="2026-01-05 22:35:15.199057851 +0000 UTC m=+3670.155260320" Jan 05 22:35:20 crc kubenswrapper[5000]: I0105 22:35:20.705241 5000 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:20 crc kubenswrapper[5000]: I0105 22:35:20.706267 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:20 crc kubenswrapper[5000]: I0105 22:35:20.754541 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:21 crc kubenswrapper[5000]: I0105 22:35:21.193587 5000 scope.go:117] "RemoveContainer" containerID="d8316623c2638ac607d0ccdc16d59b38a39ac4d2be24b6693565543cb1f30d63" Jan 05 22:35:21 crc kubenswrapper[5000]: I0105 22:35:21.282380 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:22 crc kubenswrapper[5000]: I0105 22:35:22.566390 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l86nq"] Jan 05 22:35:23 crc kubenswrapper[5000]: I0105 22:35:23.243679 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l86nq" podUID="0b09b7c4-9cc6-4ed9-999b-e7544faec215" containerName="registry-server" containerID="cri-o://ef637645b5e1af2ff1fb85460152d2b44a1fb4424c3e8a7b0e5cb01cf504ee2f" gracePeriod=2 Jan 05 22:35:23 crc kubenswrapper[5000]: I0105 22:35:23.773570 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:23 crc kubenswrapper[5000]: I0105 22:35:23.865279 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b09b7c4-9cc6-4ed9-999b-e7544faec215-catalog-content\") pod \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\" (UID: \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\") " Jan 05 22:35:23 crc kubenswrapper[5000]: I0105 22:35:23.865352 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnkbg\" (UniqueName: \"kubernetes.io/projected/0b09b7c4-9cc6-4ed9-999b-e7544faec215-kube-api-access-dnkbg\") pod \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\" (UID: \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\") " Jan 05 22:35:23 crc kubenswrapper[5000]: I0105 22:35:23.865453 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b09b7c4-9cc6-4ed9-999b-e7544faec215-utilities\") pod \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\" (UID: \"0b09b7c4-9cc6-4ed9-999b-e7544faec215\") " Jan 05 22:35:23 crc kubenswrapper[5000]: I0105 22:35:23.866590 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b09b7c4-9cc6-4ed9-999b-e7544faec215-utilities" (OuterVolumeSpecName: "utilities") pod "0b09b7c4-9cc6-4ed9-999b-e7544faec215" (UID: "0b09b7c4-9cc6-4ed9-999b-e7544faec215"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:35:23 crc kubenswrapper[5000]: I0105 22:35:23.871751 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b09b7c4-9cc6-4ed9-999b-e7544faec215-kube-api-access-dnkbg" (OuterVolumeSpecName: "kube-api-access-dnkbg") pod "0b09b7c4-9cc6-4ed9-999b-e7544faec215" (UID: "0b09b7c4-9cc6-4ed9-999b-e7544faec215"). InnerVolumeSpecName "kube-api-access-dnkbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:35:23 crc kubenswrapper[5000]: I0105 22:35:23.930562 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b09b7c4-9cc6-4ed9-999b-e7544faec215-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b09b7c4-9cc6-4ed9-999b-e7544faec215" (UID: "0b09b7c4-9cc6-4ed9-999b-e7544faec215"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:35:23 crc kubenswrapper[5000]: I0105 22:35:23.967748 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b09b7c4-9cc6-4ed9-999b-e7544faec215-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:35:23 crc kubenswrapper[5000]: I0105 22:35:23.967788 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnkbg\" (UniqueName: \"kubernetes.io/projected/0b09b7c4-9cc6-4ed9-999b-e7544faec215-kube-api-access-dnkbg\") on node \"crc\" DevicePath \"\"" Jan 05 22:35:23 crc kubenswrapper[5000]: I0105 22:35:23.967815 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b09b7c4-9cc6-4ed9-999b-e7544faec215-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.253332 5000 generic.go:334] "Generic (PLEG): container finished" podID="0b09b7c4-9cc6-4ed9-999b-e7544faec215" containerID="ef637645b5e1af2ff1fb85460152d2b44a1fb4424c3e8a7b0e5cb01cf504ee2f" exitCode=0 Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.253386 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l86nq" event={"ID":"0b09b7c4-9cc6-4ed9-999b-e7544faec215","Type":"ContainerDied","Data":"ef637645b5e1af2ff1fb85460152d2b44a1fb4424c3e8a7b0e5cb01cf504ee2f"} Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.253424 5000 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-l86nq" event={"ID":"0b09b7c4-9cc6-4ed9-999b-e7544faec215","Type":"ContainerDied","Data":"6ba537e9c83162d8d274242e1d5872763752e2b7086f3a64fc830e7711cd3063"} Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.253447 5000 scope.go:117] "RemoveContainer" containerID="ef637645b5e1af2ff1fb85460152d2b44a1fb4424c3e8a7b0e5cb01cf504ee2f" Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.253493 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l86nq" Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.273975 5000 scope.go:117] "RemoveContainer" containerID="4b3476009447d6cd86239c28cfacf6b69f175ee2f138dca19abdfd9fc8e1d732" Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.289972 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l86nq"] Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.297429 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l86nq"] Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.328603 5000 scope.go:117] "RemoveContainer" containerID="32f8baaa37626222616b464f6e5d055f0e6e66899428b416b796f0537584eb86" Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.347916 5000 scope.go:117] "RemoveContainer" containerID="ef637645b5e1af2ff1fb85460152d2b44a1fb4424c3e8a7b0e5cb01cf504ee2f" Jan 05 22:35:24 crc kubenswrapper[5000]: E0105 22:35:24.348382 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef637645b5e1af2ff1fb85460152d2b44a1fb4424c3e8a7b0e5cb01cf504ee2f\": container with ID starting with ef637645b5e1af2ff1fb85460152d2b44a1fb4424c3e8a7b0e5cb01cf504ee2f not found: ID does not exist" containerID="ef637645b5e1af2ff1fb85460152d2b44a1fb4424c3e8a7b0e5cb01cf504ee2f" Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 
22:35:24.348412 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef637645b5e1af2ff1fb85460152d2b44a1fb4424c3e8a7b0e5cb01cf504ee2f"} err="failed to get container status \"ef637645b5e1af2ff1fb85460152d2b44a1fb4424c3e8a7b0e5cb01cf504ee2f\": rpc error: code = NotFound desc = could not find container \"ef637645b5e1af2ff1fb85460152d2b44a1fb4424c3e8a7b0e5cb01cf504ee2f\": container with ID starting with ef637645b5e1af2ff1fb85460152d2b44a1fb4424c3e8a7b0e5cb01cf504ee2f not found: ID does not exist" Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.348433 5000 scope.go:117] "RemoveContainer" containerID="4b3476009447d6cd86239c28cfacf6b69f175ee2f138dca19abdfd9fc8e1d732" Jan 05 22:35:24 crc kubenswrapper[5000]: E0105 22:35:24.348748 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b3476009447d6cd86239c28cfacf6b69f175ee2f138dca19abdfd9fc8e1d732\": container with ID starting with 4b3476009447d6cd86239c28cfacf6b69f175ee2f138dca19abdfd9fc8e1d732 not found: ID does not exist" containerID="4b3476009447d6cd86239c28cfacf6b69f175ee2f138dca19abdfd9fc8e1d732" Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.348773 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3476009447d6cd86239c28cfacf6b69f175ee2f138dca19abdfd9fc8e1d732"} err="failed to get container status \"4b3476009447d6cd86239c28cfacf6b69f175ee2f138dca19abdfd9fc8e1d732\": rpc error: code = NotFound desc = could not find container \"4b3476009447d6cd86239c28cfacf6b69f175ee2f138dca19abdfd9fc8e1d732\": container with ID starting with 4b3476009447d6cd86239c28cfacf6b69f175ee2f138dca19abdfd9fc8e1d732 not found: ID does not exist" Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.348786 5000 scope.go:117] "RemoveContainer" containerID="32f8baaa37626222616b464f6e5d055f0e6e66899428b416b796f0537584eb86" Jan 05 22:35:24 crc 
kubenswrapper[5000]: E0105 22:35:24.349169 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f8baaa37626222616b464f6e5d055f0e6e66899428b416b796f0537584eb86\": container with ID starting with 32f8baaa37626222616b464f6e5d055f0e6e66899428b416b796f0537584eb86 not found: ID does not exist" containerID="32f8baaa37626222616b464f6e5d055f0e6e66899428b416b796f0537584eb86" Jan 05 22:35:24 crc kubenswrapper[5000]: I0105 22:35:24.349214 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f8baaa37626222616b464f6e5d055f0e6e66899428b416b796f0537584eb86"} err="failed to get container status \"32f8baaa37626222616b464f6e5d055f0e6e66899428b416b796f0537584eb86\": rpc error: code = NotFound desc = could not find container \"32f8baaa37626222616b464f6e5d055f0e6e66899428b416b796f0537584eb86\": container with ID starting with 32f8baaa37626222616b464f6e5d055f0e6e66899428b416b796f0537584eb86 not found: ID does not exist" Jan 05 22:35:25 crc kubenswrapper[5000]: I0105 22:35:25.337239 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b09b7c4-9cc6-4ed9-999b-e7544faec215" path="/var/lib/kubelet/pods/0b09b7c4-9cc6-4ed9-999b-e7544faec215/volumes" Jan 05 22:35:26 crc kubenswrapper[5000]: I0105 22:35:26.323465 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:35:26 crc kubenswrapper[5000]: E0105 22:35:26.323881 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:35:41 crc 
kubenswrapper[5000]: I0105 22:35:41.324145 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:35:41 crc kubenswrapper[5000]: E0105 22:35:41.324970 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:35:55 crc kubenswrapper[5000]: I0105 22:35:55.329029 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:35:55 crc kubenswrapper[5000]: E0105 22:35:55.331418 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:36:09 crc kubenswrapper[5000]: I0105 22:36:09.329378 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:36:09 crc kubenswrapper[5000]: E0105 22:36:09.330032 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 
05 22:36:21 crc kubenswrapper[5000]: I0105 22:36:21.256025 5000 scope.go:117] "RemoveContainer" containerID="7f81016fafc1c96e44e4d095033c93456ad0b8c1daf8ad02ae854b67beab9cc4" Jan 05 22:36:21 crc kubenswrapper[5000]: I0105 22:36:21.324000 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:36:21 crc kubenswrapper[5000]: E0105 22:36:21.324276 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:36:22 crc kubenswrapper[5000]: I0105 22:36:22.848926 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qd74t/must-gather-wpvps"] Jan 05 22:36:22 crc kubenswrapper[5000]: E0105 22:36:22.849726 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b09b7c4-9cc6-4ed9-999b-e7544faec215" containerName="registry-server" Jan 05 22:36:22 crc kubenswrapper[5000]: I0105 22:36:22.849741 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b09b7c4-9cc6-4ed9-999b-e7544faec215" containerName="registry-server" Jan 05 22:36:22 crc kubenswrapper[5000]: E0105 22:36:22.849761 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b09b7c4-9cc6-4ed9-999b-e7544faec215" containerName="extract-content" Jan 05 22:36:22 crc kubenswrapper[5000]: I0105 22:36:22.849768 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b09b7c4-9cc6-4ed9-999b-e7544faec215" containerName="extract-content" Jan 05 22:36:22 crc kubenswrapper[5000]: E0105 22:36:22.849780 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b09b7c4-9cc6-4ed9-999b-e7544faec215" 
containerName="extract-utilities" Jan 05 22:36:22 crc kubenswrapper[5000]: I0105 22:36:22.849788 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b09b7c4-9cc6-4ed9-999b-e7544faec215" containerName="extract-utilities" Jan 05 22:36:22 crc kubenswrapper[5000]: I0105 22:36:22.850029 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b09b7c4-9cc6-4ed9-999b-e7544faec215" containerName="registry-server" Jan 05 22:36:22 crc kubenswrapper[5000]: I0105 22:36:22.851161 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qd74t/must-gather-wpvps" Jan 05 22:36:22 crc kubenswrapper[5000]: I0105 22:36:22.853739 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qd74t"/"kube-root-ca.crt" Jan 05 22:36:22 crc kubenswrapper[5000]: I0105 22:36:22.853836 5000 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qd74t"/"openshift-service-ca.crt" Jan 05 22:36:22 crc kubenswrapper[5000]: I0105 22:36:22.868106 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qd74t/must-gather-wpvps"] Jan 05 22:36:22 crc kubenswrapper[5000]: I0105 22:36:22.948853 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vrgz\" (UniqueName: \"kubernetes.io/projected/074fa3b1-2fbf-4625-b2dd-418fc809bc81-kube-api-access-6vrgz\") pod \"must-gather-wpvps\" (UID: \"074fa3b1-2fbf-4625-b2dd-418fc809bc81\") " pod="openshift-must-gather-qd74t/must-gather-wpvps" Jan 05 22:36:22 crc kubenswrapper[5000]: I0105 22:36:22.948929 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/074fa3b1-2fbf-4625-b2dd-418fc809bc81-must-gather-output\") pod \"must-gather-wpvps\" (UID: \"074fa3b1-2fbf-4625-b2dd-418fc809bc81\") " pod="openshift-must-gather-qd74t/must-gather-wpvps" Jan 
05 22:36:23 crc kubenswrapper[5000]: I0105 22:36:23.050485 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vrgz\" (UniqueName: \"kubernetes.io/projected/074fa3b1-2fbf-4625-b2dd-418fc809bc81-kube-api-access-6vrgz\") pod \"must-gather-wpvps\" (UID: \"074fa3b1-2fbf-4625-b2dd-418fc809bc81\") " pod="openshift-must-gather-qd74t/must-gather-wpvps" Jan 05 22:36:23 crc kubenswrapper[5000]: I0105 22:36:23.050547 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/074fa3b1-2fbf-4625-b2dd-418fc809bc81-must-gather-output\") pod \"must-gather-wpvps\" (UID: \"074fa3b1-2fbf-4625-b2dd-418fc809bc81\") " pod="openshift-must-gather-qd74t/must-gather-wpvps" Jan 05 22:36:23 crc kubenswrapper[5000]: I0105 22:36:23.051015 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/074fa3b1-2fbf-4625-b2dd-418fc809bc81-must-gather-output\") pod \"must-gather-wpvps\" (UID: \"074fa3b1-2fbf-4625-b2dd-418fc809bc81\") " pod="openshift-must-gather-qd74t/must-gather-wpvps" Jan 05 22:36:23 crc kubenswrapper[5000]: I0105 22:36:23.074859 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vrgz\" (UniqueName: \"kubernetes.io/projected/074fa3b1-2fbf-4625-b2dd-418fc809bc81-kube-api-access-6vrgz\") pod \"must-gather-wpvps\" (UID: \"074fa3b1-2fbf-4625-b2dd-418fc809bc81\") " pod="openshift-must-gather-qd74t/must-gather-wpvps" Jan 05 22:36:23 crc kubenswrapper[5000]: I0105 22:36:23.173037 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qd74t/must-gather-wpvps" Jan 05 22:36:23 crc kubenswrapper[5000]: I0105 22:36:23.633036 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qd74t/must-gather-wpvps"] Jan 05 22:36:23 crc kubenswrapper[5000]: W0105 22:36:23.643112 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod074fa3b1_2fbf_4625_b2dd_418fc809bc81.slice/crio-948e689efc851ae4d63c05e65a7c100d19bac375b5730bdf1b41c6f06c1c0933 WatchSource:0}: Error finding container 948e689efc851ae4d63c05e65a7c100d19bac375b5730bdf1b41c6f06c1c0933: Status 404 returned error can't find the container with id 948e689efc851ae4d63c05e65a7c100d19bac375b5730bdf1b41c6f06c1c0933 Jan 05 22:36:23 crc kubenswrapper[5000]: I0105 22:36:23.752796 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qd74t/must-gather-wpvps" event={"ID":"074fa3b1-2fbf-4625-b2dd-418fc809bc81","Type":"ContainerStarted","Data":"948e689efc851ae4d63c05e65a7c100d19bac375b5730bdf1b41c6f06c1c0933"} Jan 05 22:36:24 crc kubenswrapper[5000]: I0105 22:36:24.761351 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qd74t/must-gather-wpvps" event={"ID":"074fa3b1-2fbf-4625-b2dd-418fc809bc81","Type":"ContainerStarted","Data":"980c860cc74bbe871e5a78c019147d8def9ff18accabe2e265a995b0491a3753"} Jan 05 22:36:24 crc kubenswrapper[5000]: I0105 22:36:24.761655 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qd74t/must-gather-wpvps" event={"ID":"074fa3b1-2fbf-4625-b2dd-418fc809bc81","Type":"ContainerStarted","Data":"0b85aedafb603ab219985b3c301f3a439e0c9366d28c4d39c5db592fff024b98"} Jan 05 22:36:24 crc kubenswrapper[5000]: I0105 22:36:24.781852 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qd74t/must-gather-wpvps" podStartSLOduration=2.781830969 
podStartE2EDuration="2.781830969s" podCreationTimestamp="2026-01-05 22:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:36:24.777011823 +0000 UTC m=+3739.733214302" watchObservedRunningTime="2026-01-05 22:36:24.781830969 +0000 UTC m=+3739.738033438" Jan 05 22:36:27 crc kubenswrapper[5000]: I0105 22:36:27.524546 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qd74t/crc-debug-dwtqf"] Jan 05 22:36:27 crc kubenswrapper[5000]: I0105 22:36:27.527225 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qd74t/crc-debug-dwtqf" Jan 05 22:36:27 crc kubenswrapper[5000]: I0105 22:36:27.529688 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qd74t"/"default-dockercfg-jcfq6" Jan 05 22:36:27 crc kubenswrapper[5000]: I0105 22:36:27.657371 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8d3e7ef-a49d-4ff6-aaf9-923726c795db-host\") pod \"crc-debug-dwtqf\" (UID: \"a8d3e7ef-a49d-4ff6-aaf9-923726c795db\") " pod="openshift-must-gather-qd74t/crc-debug-dwtqf" Jan 05 22:36:27 crc kubenswrapper[5000]: I0105 22:36:27.657681 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vg4l\" (UniqueName: \"kubernetes.io/projected/a8d3e7ef-a49d-4ff6-aaf9-923726c795db-kube-api-access-6vg4l\") pod \"crc-debug-dwtqf\" (UID: \"a8d3e7ef-a49d-4ff6-aaf9-923726c795db\") " pod="openshift-must-gather-qd74t/crc-debug-dwtqf" Jan 05 22:36:27 crc kubenswrapper[5000]: I0105 22:36:27.759389 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8d3e7ef-a49d-4ff6-aaf9-923726c795db-host\") pod \"crc-debug-dwtqf\" (UID: 
\"a8d3e7ef-a49d-4ff6-aaf9-923726c795db\") " pod="openshift-must-gather-qd74t/crc-debug-dwtqf" Jan 05 22:36:27 crc kubenswrapper[5000]: I0105 22:36:27.759513 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vg4l\" (UniqueName: \"kubernetes.io/projected/a8d3e7ef-a49d-4ff6-aaf9-923726c795db-kube-api-access-6vg4l\") pod \"crc-debug-dwtqf\" (UID: \"a8d3e7ef-a49d-4ff6-aaf9-923726c795db\") " pod="openshift-must-gather-qd74t/crc-debug-dwtqf" Jan 05 22:36:27 crc kubenswrapper[5000]: I0105 22:36:27.760367 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8d3e7ef-a49d-4ff6-aaf9-923726c795db-host\") pod \"crc-debug-dwtqf\" (UID: \"a8d3e7ef-a49d-4ff6-aaf9-923726c795db\") " pod="openshift-must-gather-qd74t/crc-debug-dwtqf" Jan 05 22:36:27 crc kubenswrapper[5000]: I0105 22:36:27.782878 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vg4l\" (UniqueName: \"kubernetes.io/projected/a8d3e7ef-a49d-4ff6-aaf9-923726c795db-kube-api-access-6vg4l\") pod \"crc-debug-dwtqf\" (UID: \"a8d3e7ef-a49d-4ff6-aaf9-923726c795db\") " pod="openshift-must-gather-qd74t/crc-debug-dwtqf" Jan 05 22:36:27 crc kubenswrapper[5000]: I0105 22:36:27.859200 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qd74t/crc-debug-dwtqf" Jan 05 22:36:28 crc kubenswrapper[5000]: I0105 22:36:28.832480 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qd74t/crc-debug-dwtqf" event={"ID":"a8d3e7ef-a49d-4ff6-aaf9-923726c795db","Type":"ContainerStarted","Data":"bc3947be244ade408f2ae7b05571388ca32e919fcfaf6fc8d39c6f502eb98b35"} Jan 05 22:36:28 crc kubenswrapper[5000]: I0105 22:36:28.833073 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qd74t/crc-debug-dwtqf" event={"ID":"a8d3e7ef-a49d-4ff6-aaf9-923726c795db","Type":"ContainerStarted","Data":"7fa8073bdea361560964c04855ed67b31b3c1f3cc1b299e9fb341b393ef007cc"} Jan 05 22:36:28 crc kubenswrapper[5000]: I0105 22:36:28.909406 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qd74t/crc-debug-dwtqf" podStartSLOduration=1.909385594 podStartE2EDuration="1.909385594s" podCreationTimestamp="2026-01-05 22:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-05 22:36:28.903750414 +0000 UTC m=+3743.859952893" watchObservedRunningTime="2026-01-05 22:36:28.909385594 +0000 UTC m=+3743.865588063" Jan 05 22:36:35 crc kubenswrapper[5000]: I0105 22:36:35.335820 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:36:35 crc kubenswrapper[5000]: E0105 22:36:35.336696 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:36:47 crc 
kubenswrapper[5000]: I0105 22:36:47.324498 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:36:47 crc kubenswrapper[5000]: E0105 22:36:47.325237 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:37:01 crc kubenswrapper[5000]: I0105 22:37:01.323829 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:37:01 crc kubenswrapper[5000]: E0105 22:37:01.324680 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:37:05 crc kubenswrapper[5000]: I0105 22:37:05.115706 5000 generic.go:334] "Generic (PLEG): container finished" podID="a8d3e7ef-a49d-4ff6-aaf9-923726c795db" containerID="bc3947be244ade408f2ae7b05571388ca32e919fcfaf6fc8d39c6f502eb98b35" exitCode=0 Jan 05 22:37:05 crc kubenswrapper[5000]: I0105 22:37:05.115777 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qd74t/crc-debug-dwtqf" event={"ID":"a8d3e7ef-a49d-4ff6-aaf9-923726c795db","Type":"ContainerDied","Data":"bc3947be244ade408f2ae7b05571388ca32e919fcfaf6fc8d39c6f502eb98b35"} Jan 05 22:37:06 crc kubenswrapper[5000]: I0105 22:37:06.243901 5000 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-must-gather-qd74t/crc-debug-dwtqf" Jan 05 22:37:06 crc kubenswrapper[5000]: I0105 22:37:06.277294 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qd74t/crc-debug-dwtqf"] Jan 05 22:37:06 crc kubenswrapper[5000]: I0105 22:37:06.287332 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qd74t/crc-debug-dwtqf"] Jan 05 22:37:06 crc kubenswrapper[5000]: I0105 22:37:06.301843 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8d3e7ef-a49d-4ff6-aaf9-923726c795db-host\") pod \"a8d3e7ef-a49d-4ff6-aaf9-923726c795db\" (UID: \"a8d3e7ef-a49d-4ff6-aaf9-923726c795db\") " Jan 05 22:37:06 crc kubenswrapper[5000]: I0105 22:37:06.301958 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8d3e7ef-a49d-4ff6-aaf9-923726c795db-host" (OuterVolumeSpecName: "host") pod "a8d3e7ef-a49d-4ff6-aaf9-923726c795db" (UID: "a8d3e7ef-a49d-4ff6-aaf9-923726c795db"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:37:06 crc kubenswrapper[5000]: I0105 22:37:06.302089 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vg4l\" (UniqueName: \"kubernetes.io/projected/a8d3e7ef-a49d-4ff6-aaf9-923726c795db-kube-api-access-6vg4l\") pod \"a8d3e7ef-a49d-4ff6-aaf9-923726c795db\" (UID: \"a8d3e7ef-a49d-4ff6-aaf9-923726c795db\") " Jan 05 22:37:06 crc kubenswrapper[5000]: I0105 22:37:06.302510 5000 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a8d3e7ef-a49d-4ff6-aaf9-923726c795db-host\") on node \"crc\" DevicePath \"\"" Jan 05 22:37:06 crc kubenswrapper[5000]: I0105 22:37:06.307409 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8d3e7ef-a49d-4ff6-aaf9-923726c795db-kube-api-access-6vg4l" (OuterVolumeSpecName: "kube-api-access-6vg4l") pod "a8d3e7ef-a49d-4ff6-aaf9-923726c795db" (UID: "a8d3e7ef-a49d-4ff6-aaf9-923726c795db"). InnerVolumeSpecName "kube-api-access-6vg4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:37:06 crc kubenswrapper[5000]: I0105 22:37:06.404540 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vg4l\" (UniqueName: \"kubernetes.io/projected/a8d3e7ef-a49d-4ff6-aaf9-923726c795db-kube-api-access-6vg4l\") on node \"crc\" DevicePath \"\"" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.135551 5000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fa8073bdea361560964c04855ed67b31b3c1f3cc1b299e9fb341b393ef007cc" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.135938 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qd74t/crc-debug-dwtqf" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.333916 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8d3e7ef-a49d-4ff6-aaf9-923726c795db" path="/var/lib/kubelet/pods/a8d3e7ef-a49d-4ff6-aaf9-923726c795db/volumes" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.447058 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qd74t/crc-debug-7knjz"] Jan 05 22:37:07 crc kubenswrapper[5000]: E0105 22:37:07.447409 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d3e7ef-a49d-4ff6-aaf9-923726c795db" containerName="container-00" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.447425 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d3e7ef-a49d-4ff6-aaf9-923726c795db" containerName="container-00" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.447613 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8d3e7ef-a49d-4ff6-aaf9-923726c795db" containerName="container-00" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.448203 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qd74t/crc-debug-7knjz" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.453040 5000 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qd74t"/"default-dockercfg-jcfq6" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.524983 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d370d75-1872-411b-b0ac-de0447394080-host\") pod \"crc-debug-7knjz\" (UID: \"3d370d75-1872-411b-b0ac-de0447394080\") " pod="openshift-must-gather-qd74t/crc-debug-7knjz" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.525127 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrfpt\" (UniqueName: \"kubernetes.io/projected/3d370d75-1872-411b-b0ac-de0447394080-kube-api-access-jrfpt\") pod \"crc-debug-7knjz\" (UID: \"3d370d75-1872-411b-b0ac-de0447394080\") " pod="openshift-must-gather-qd74t/crc-debug-7knjz" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.627324 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d370d75-1872-411b-b0ac-de0447394080-host\") pod \"crc-debug-7knjz\" (UID: \"3d370d75-1872-411b-b0ac-de0447394080\") " pod="openshift-must-gather-qd74t/crc-debug-7knjz" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.627452 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrfpt\" (UniqueName: \"kubernetes.io/projected/3d370d75-1872-411b-b0ac-de0447394080-kube-api-access-jrfpt\") pod \"crc-debug-7knjz\" (UID: \"3d370d75-1872-411b-b0ac-de0447394080\") " pod="openshift-must-gather-qd74t/crc-debug-7knjz" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.627534 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/3d370d75-1872-411b-b0ac-de0447394080-host\") pod \"crc-debug-7knjz\" (UID: \"3d370d75-1872-411b-b0ac-de0447394080\") " pod="openshift-must-gather-qd74t/crc-debug-7knjz" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.646815 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrfpt\" (UniqueName: \"kubernetes.io/projected/3d370d75-1872-411b-b0ac-de0447394080-kube-api-access-jrfpt\") pod \"crc-debug-7knjz\" (UID: \"3d370d75-1872-411b-b0ac-de0447394080\") " pod="openshift-must-gather-qd74t/crc-debug-7knjz" Jan 05 22:37:07 crc kubenswrapper[5000]: I0105 22:37:07.764637 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qd74t/crc-debug-7knjz" Jan 05 22:37:07 crc kubenswrapper[5000]: W0105 22:37:07.792995 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d370d75_1872_411b_b0ac_de0447394080.slice/crio-361395ae6074e39e7426dbc4524d9130f19caf2f64b5bbf9eafc2d22a1a8ecd9 WatchSource:0}: Error finding container 361395ae6074e39e7426dbc4524d9130f19caf2f64b5bbf9eafc2d22a1a8ecd9: Status 404 returned error can't find the container with id 361395ae6074e39e7426dbc4524d9130f19caf2f64b5bbf9eafc2d22a1a8ecd9 Jan 05 22:37:08 crc kubenswrapper[5000]: I0105 22:37:08.145728 5000 generic.go:334] "Generic (PLEG): container finished" podID="3d370d75-1872-411b-b0ac-de0447394080" containerID="574c45d465c4fde338950bcc63c12a95e12671c6aa904f677ca292dd01d79b83" exitCode=0 Jan 05 22:37:08 crc kubenswrapper[5000]: I0105 22:37:08.145805 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qd74t/crc-debug-7knjz" event={"ID":"3d370d75-1872-411b-b0ac-de0447394080","Type":"ContainerDied","Data":"574c45d465c4fde338950bcc63c12a95e12671c6aa904f677ca292dd01d79b83"} Jan 05 22:37:08 crc kubenswrapper[5000]: I0105 22:37:08.146160 5000 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-must-gather-qd74t/crc-debug-7knjz" event={"ID":"3d370d75-1872-411b-b0ac-de0447394080","Type":"ContainerStarted","Data":"361395ae6074e39e7426dbc4524d9130f19caf2f64b5bbf9eafc2d22a1a8ecd9"} Jan 05 22:37:08 crc kubenswrapper[5000]: I0105 22:37:08.604094 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qd74t/crc-debug-7knjz"] Jan 05 22:37:08 crc kubenswrapper[5000]: I0105 22:37:08.615601 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qd74t/crc-debug-7knjz"] Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.262804 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qd74t/crc-debug-7knjz" Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.358457 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrfpt\" (UniqueName: \"kubernetes.io/projected/3d370d75-1872-411b-b0ac-de0447394080-kube-api-access-jrfpt\") pod \"3d370d75-1872-411b-b0ac-de0447394080\" (UID: \"3d370d75-1872-411b-b0ac-de0447394080\") " Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.358546 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d370d75-1872-411b-b0ac-de0447394080-host\") pod \"3d370d75-1872-411b-b0ac-de0447394080\" (UID: \"3d370d75-1872-411b-b0ac-de0447394080\") " Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.358634 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d370d75-1872-411b-b0ac-de0447394080-host" (OuterVolumeSpecName: "host") pod "3d370d75-1872-411b-b0ac-de0447394080" (UID: "3d370d75-1872-411b-b0ac-de0447394080"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.359132 5000 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d370d75-1872-411b-b0ac-de0447394080-host\") on node \"crc\" DevicePath \"\"" Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.365053 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d370d75-1872-411b-b0ac-de0447394080-kube-api-access-jrfpt" (OuterVolumeSpecName: "kube-api-access-jrfpt") pod "3d370d75-1872-411b-b0ac-de0447394080" (UID: "3d370d75-1872-411b-b0ac-de0447394080"). InnerVolumeSpecName "kube-api-access-jrfpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.460878 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrfpt\" (UniqueName: \"kubernetes.io/projected/3d370d75-1872-411b-b0ac-de0447394080-kube-api-access-jrfpt\") on node \"crc\" DevicePath \"\"" Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.785597 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qd74t/crc-debug-mlmk7"] Jan 05 22:37:09 crc kubenswrapper[5000]: E0105 22:37:09.786230 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d370d75-1872-411b-b0ac-de0447394080" containerName="container-00" Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.786270 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d370d75-1872-411b-b0ac-de0447394080" containerName="container-00" Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.786554 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d370d75-1872-411b-b0ac-de0447394080" containerName="container-00" Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.787312 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qd74t/crc-debug-mlmk7" Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.868649 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgqn4\" (UniqueName: \"kubernetes.io/projected/e4d15f90-3765-4845-aec6-31138c05baab-kube-api-access-rgqn4\") pod \"crc-debug-mlmk7\" (UID: \"e4d15f90-3765-4845-aec6-31138c05baab\") " pod="openshift-must-gather-qd74t/crc-debug-mlmk7" Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.868737 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4d15f90-3765-4845-aec6-31138c05baab-host\") pod \"crc-debug-mlmk7\" (UID: \"e4d15f90-3765-4845-aec6-31138c05baab\") " pod="openshift-must-gather-qd74t/crc-debug-mlmk7" Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.971009 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgqn4\" (UniqueName: \"kubernetes.io/projected/e4d15f90-3765-4845-aec6-31138c05baab-kube-api-access-rgqn4\") pod \"crc-debug-mlmk7\" (UID: \"e4d15f90-3765-4845-aec6-31138c05baab\") " pod="openshift-must-gather-qd74t/crc-debug-mlmk7" Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.971087 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4d15f90-3765-4845-aec6-31138c05baab-host\") pod \"crc-debug-mlmk7\" (UID: \"e4d15f90-3765-4845-aec6-31138c05baab\") " pod="openshift-must-gather-qd74t/crc-debug-mlmk7" Jan 05 22:37:09 crc kubenswrapper[5000]: I0105 22:37:09.971332 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4d15f90-3765-4845-aec6-31138c05baab-host\") pod \"crc-debug-mlmk7\" (UID: \"e4d15f90-3765-4845-aec6-31138c05baab\") " pod="openshift-must-gather-qd74t/crc-debug-mlmk7" Jan 05 22:37:09 crc 
kubenswrapper[5000]: I0105 22:37:09.995001 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgqn4\" (UniqueName: \"kubernetes.io/projected/e4d15f90-3765-4845-aec6-31138c05baab-kube-api-access-rgqn4\") pod \"crc-debug-mlmk7\" (UID: \"e4d15f90-3765-4845-aec6-31138c05baab\") " pod="openshift-must-gather-qd74t/crc-debug-mlmk7" Jan 05 22:37:10 crc kubenswrapper[5000]: I0105 22:37:10.104270 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qd74t/crc-debug-mlmk7" Jan 05 22:37:10 crc kubenswrapper[5000]: I0105 22:37:10.162849 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qd74t/crc-debug-mlmk7" event={"ID":"e4d15f90-3765-4845-aec6-31138c05baab","Type":"ContainerStarted","Data":"3ac369c46e734bf98f4e59e8a46338ca01db4bdbd3f99a3e76a737aa7dc69de7"} Jan 05 22:37:10 crc kubenswrapper[5000]: I0105 22:37:10.164445 5000 scope.go:117] "RemoveContainer" containerID="574c45d465c4fde338950bcc63c12a95e12671c6aa904f677ca292dd01d79b83" Jan 05 22:37:10 crc kubenswrapper[5000]: I0105 22:37:10.164549 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qd74t/crc-debug-7knjz" Jan 05 22:37:11 crc kubenswrapper[5000]: I0105 22:37:11.174811 5000 generic.go:334] "Generic (PLEG): container finished" podID="e4d15f90-3765-4845-aec6-31138c05baab" containerID="938dc28fe7f15f7005d88388f4bc9d4da0a4c57a3fea0c41c0bc7dc28808b60d" exitCode=0 Jan 05 22:37:11 crc kubenswrapper[5000]: I0105 22:37:11.174913 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qd74t/crc-debug-mlmk7" event={"ID":"e4d15f90-3765-4845-aec6-31138c05baab","Type":"ContainerDied","Data":"938dc28fe7f15f7005d88388f4bc9d4da0a4c57a3fea0c41c0bc7dc28808b60d"} Jan 05 22:37:11 crc kubenswrapper[5000]: I0105 22:37:11.209033 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qd74t/crc-debug-mlmk7"] Jan 05 22:37:11 crc kubenswrapper[5000]: I0105 22:37:11.216435 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qd74t/crc-debug-mlmk7"] Jan 05 22:37:11 crc kubenswrapper[5000]: I0105 22:37:11.334386 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d370d75-1872-411b-b0ac-de0447394080" path="/var/lib/kubelet/pods/3d370d75-1872-411b-b0ac-de0447394080/volumes" Jan 05 22:37:12 crc kubenswrapper[5000]: I0105 22:37:12.283846 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qd74t/crc-debug-mlmk7" Jan 05 22:37:12 crc kubenswrapper[5000]: I0105 22:37:12.324436 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:37:12 crc kubenswrapper[5000]: E0105 22:37:12.325060 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:37:12 crc kubenswrapper[5000]: I0105 22:37:12.412404 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgqn4\" (UniqueName: \"kubernetes.io/projected/e4d15f90-3765-4845-aec6-31138c05baab-kube-api-access-rgqn4\") pod \"e4d15f90-3765-4845-aec6-31138c05baab\" (UID: \"e4d15f90-3765-4845-aec6-31138c05baab\") " Jan 05 22:37:12 crc kubenswrapper[5000]: I0105 22:37:12.412725 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4d15f90-3765-4845-aec6-31138c05baab-host\") pod \"e4d15f90-3765-4845-aec6-31138c05baab\" (UID: \"e4d15f90-3765-4845-aec6-31138c05baab\") " Jan 05 22:37:12 crc kubenswrapper[5000]: I0105 22:37:12.412829 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4d15f90-3765-4845-aec6-31138c05baab-host" (OuterVolumeSpecName: "host") pod "e4d15f90-3765-4845-aec6-31138c05baab" (UID: "e4d15f90-3765-4845-aec6-31138c05baab"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 05 22:37:12 crc kubenswrapper[5000]: I0105 22:37:12.413523 5000 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4d15f90-3765-4845-aec6-31138c05baab-host\") on node \"crc\" DevicePath \"\"" Jan 05 22:37:12 crc kubenswrapper[5000]: I0105 22:37:12.423078 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d15f90-3765-4845-aec6-31138c05baab-kube-api-access-rgqn4" (OuterVolumeSpecName: "kube-api-access-rgqn4") pod "e4d15f90-3765-4845-aec6-31138c05baab" (UID: "e4d15f90-3765-4845-aec6-31138c05baab"). InnerVolumeSpecName "kube-api-access-rgqn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:37:12 crc kubenswrapper[5000]: I0105 22:37:12.516047 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgqn4\" (UniqueName: \"kubernetes.io/projected/e4d15f90-3765-4845-aec6-31138c05baab-kube-api-access-rgqn4\") on node \"crc\" DevicePath \"\"" Jan 05 22:37:13 crc kubenswrapper[5000]: I0105 22:37:13.191877 5000 scope.go:117] "RemoveContainer" containerID="938dc28fe7f15f7005d88388f4bc9d4da0a4c57a3fea0c41c0bc7dc28808b60d" Jan 05 22:37:13 crc kubenswrapper[5000]: I0105 22:37:13.191919 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qd74t/crc-debug-mlmk7" Jan 05 22:37:13 crc kubenswrapper[5000]: I0105 22:37:13.334664 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d15f90-3765-4845-aec6-31138c05baab" path="/var/lib/kubelet/pods/e4d15f90-3765-4845-aec6-31138c05baab/volumes" Jan 05 22:37:25 crc kubenswrapper[5000]: I0105 22:37:25.353583 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:37:25 crc kubenswrapper[5000]: E0105 22:37:25.354994 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:37:29 crc kubenswrapper[5000]: I0105 22:37:29.961776 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59df95cbb-xkgb8_bd1efe56-77b9-43ee-9c00-563a30e3d948/barbican-api/0.log" Jan 05 22:37:30 crc kubenswrapper[5000]: I0105 22:37:30.143861 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59df95cbb-xkgb8_bd1efe56-77b9-43ee-9c00-563a30e3d948/barbican-api-log/0.log" Jan 05 22:37:30 crc kubenswrapper[5000]: I0105 22:37:30.147820 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b7c959586-6rv2n_dc0b4eb9-6ea0-470c-b684-35945245161c/barbican-keystone-listener/0.log" Jan 05 22:37:30 crc kubenswrapper[5000]: I0105 22:37:30.162077 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b7c959586-6rv2n_dc0b4eb9-6ea0-470c-b684-35945245161c/barbican-keystone-listener-log/0.log" Jan 05 22:37:30 crc kubenswrapper[5000]: 
I0105 22:37:30.342414 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b6686bbd5-nnkl5_b4c4d270-9b90-47d9-b076-feac4ab48232/barbican-worker-log/0.log" Jan 05 22:37:30 crc kubenswrapper[5000]: I0105 22:37:30.363627 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b6686bbd5-nnkl5_b4c4d270-9b90-47d9-b076-feac4ab48232/barbican-worker/0.log" Jan 05 22:37:30 crc kubenswrapper[5000]: I0105 22:37:30.539583 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-v9fnm_a03fd86d-bb7e-48cb-b37e-f94231148420/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:30 crc kubenswrapper[5000]: I0105 22:37:30.577016 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1/ceilometer-central-agent/0.log" Jan 05 22:37:30 crc kubenswrapper[5000]: I0105 22:37:30.630917 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1/ceilometer-notification-agent/0.log" Jan 05 22:37:30 crc kubenswrapper[5000]: I0105 22:37:30.733385 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1/proxy-httpd/0.log" Jan 05 22:37:30 crc kubenswrapper[5000]: I0105 22:37:30.757684 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b9197e8a-5f9e-47b8-9da6-d9ff3cf8ddb1/sg-core/0.log" Jan 05 22:37:30 crc kubenswrapper[5000]: I0105 22:37:30.854252 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3278f23c-9157-4155-b406-e1ff0591348e/cinder-api/0.log" Jan 05 22:37:30 crc kubenswrapper[5000]: I0105 22:37:30.947908 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3278f23c-9157-4155-b406-e1ff0591348e/cinder-api-log/0.log" Jan 05 22:37:31 crc 
kubenswrapper[5000]: I0105 22:37:31.027145 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2ed63e4c-9365-423b-8eaf-a959b812ed86/probe/0.log" Jan 05 22:37:31 crc kubenswrapper[5000]: I0105 22:37:31.098514 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2ed63e4c-9365-423b-8eaf-a959b812ed86/cinder-scheduler/0.log" Jan 05 22:37:31 crc kubenswrapper[5000]: I0105 22:37:31.178680 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wcqhh_85045115-6f3e-4624-9e9b-0db7e0a6419e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:31 crc kubenswrapper[5000]: I0105 22:37:31.322478 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5f6vp_83978ac1-3e0e-40e4-9009-0be10125c3a0/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:31 crc kubenswrapper[5000]: I0105 22:37:31.402024 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-n9pp4_0814f5ce-cff2-445e-9207-664fdcb0e357/init/0.log" Jan 05 22:37:31 crc kubenswrapper[5000]: I0105 22:37:31.594198 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-n9pp4_0814f5ce-cff2-445e-9207-664fdcb0e357/dnsmasq-dns/0.log" Jan 05 22:37:31 crc kubenswrapper[5000]: I0105 22:37:31.608434 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-8zgst_65606fc1-6df2-4b19-8964-b69f04feb59b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:31 crc kubenswrapper[5000]: I0105 22:37:31.614189 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-n9pp4_0814f5ce-cff2-445e-9207-664fdcb0e357/init/0.log" Jan 05 22:37:31 crc kubenswrapper[5000]: I0105 22:37:31.782310 5000 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8587a6fa-051f-4c91-bb39-6c9bb628adbb/glance-httpd/0.log" Jan 05 22:37:31 crc kubenswrapper[5000]: I0105 22:37:31.814747 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8587a6fa-051f-4c91-bb39-6c9bb628adbb/glance-log/0.log" Jan 05 22:37:31 crc kubenswrapper[5000]: I0105 22:37:31.964819 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_62ae3bff-5f88-4662-86d4-0a4e1c51c8be/glance-httpd/0.log" Jan 05 22:37:32 crc kubenswrapper[5000]: I0105 22:37:32.011438 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_62ae3bff-5f88-4662-86d4-0a4e1c51c8be/glance-log/0.log" Jan 05 22:37:32 crc kubenswrapper[5000]: I0105 22:37:32.219028 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-jr4mv_854b990c-d8e5-4735-b5d4-a522969647e9/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:32 crc kubenswrapper[5000]: I0105 22:37:32.242055 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f48b4784d-5jgvr_ed51a505-1c96-4f98-879e-75283649a949/horizon/0.log" Jan 05 22:37:32 crc kubenswrapper[5000]: I0105 22:37:32.448396 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qkwpv_7b55f097-bc7e-471e-88de-725221c23439/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:32 crc kubenswrapper[5000]: I0105 22:37:32.480958 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f48b4784d-5jgvr_ed51a505-1c96-4f98-879e-75283649a949/horizon-log/0.log" Jan 05 22:37:32 crc kubenswrapper[5000]: I0105 22:37:32.724560 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29460841-tkgzh_15fb1cfb-41eb-4567-a694-821f1da15b07/keystone-cron/0.log" Jan 05 22:37:32 crc kubenswrapper[5000]: I0105 22:37:32.768017 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6c8579bfdd-r7vxj_edc2dca8-56cc-43b6-b35d-18b84ff237d3/keystone-api/0.log" Jan 05 22:37:32 crc kubenswrapper[5000]: I0105 22:37:32.921773 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_1cb8a9e8-897c-4005-9ba7-555eeba1b6c1/kube-state-metrics/0.log" Jan 05 22:37:32 crc kubenswrapper[5000]: I0105 22:37:32.953312 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-nk6kw_d3f9a210-263c-4290-8509-6b86ade6772c/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:33 crc kubenswrapper[5000]: I0105 22:37:33.319269 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86bdcd58d9-pztv2_43d9e1d2-3e87-4260-ba24-41e7cfbd4326/neutron-httpd/0.log" Jan 05 22:37:33 crc kubenswrapper[5000]: I0105 22:37:33.378598 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86bdcd58d9-pztv2_43d9e1d2-3e87-4260-ba24-41e7cfbd4326/neutron-api/0.log" Jan 05 22:37:33 crc kubenswrapper[5000]: I0105 22:37:33.564159 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hx6kg_e386442b-3735-4e85-8361-5a795c888c81/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:33 crc kubenswrapper[5000]: I0105 22:37:33.996804 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2c5dc335-0750-413c-a08d-6aaea2323daf/nova-api-log/0.log" Jan 05 22:37:34 crc kubenswrapper[5000]: I0105 22:37:34.089186 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_3c91798a-921c-4031-8e5f-0752bebcc325/nova-cell0-conductor-conductor/0.log" Jan 05 22:37:34 crc kubenswrapper[5000]: I0105 22:37:34.393049 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2c5dc335-0750-413c-a08d-6aaea2323daf/nova-api-api/0.log" Jan 05 22:37:34 crc kubenswrapper[5000]: I0105 22:37:34.444307 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f341f64a-418c-4790-a14a-fc9768d6fc82/nova-cell1-conductor-conductor/0.log" Jan 05 22:37:34 crc kubenswrapper[5000]: I0105 22:37:34.484172 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_aa822db9-b962-42dd-a6c8-3774d9c6d477/nova-cell1-novncproxy-novncproxy/0.log" Jan 05 22:37:34 crc kubenswrapper[5000]: I0105 22:37:34.630564 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-j64gt_50f95f21-c8bd-4de7-8f5b-1e236a1d5d7c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:34 crc kubenswrapper[5000]: I0105 22:37:34.804320 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ee3ead96-f298-4707-b5aa-3f310fd71ade/nova-metadata-log/0.log" Jan 05 22:37:35 crc kubenswrapper[5000]: I0105 22:37:35.073195 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a3923d31-eca2-40c4-b412-07b158c9fbcc/nova-scheduler-scheduler/0.log" Jan 05 22:37:35 crc kubenswrapper[5000]: I0105 22:37:35.315208 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_43e574d5-969c-40aa-abd6-69f81feef2c5/mysql-bootstrap/0.log" Jan 05 22:37:35 crc kubenswrapper[5000]: I0105 22:37:35.500501 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_43e574d5-969c-40aa-abd6-69f81feef2c5/mysql-bootstrap/0.log" Jan 05 22:37:35 crc kubenswrapper[5000]: 
I0105 22:37:35.507463 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_43e574d5-969c-40aa-abd6-69f81feef2c5/galera/0.log" Jan 05 22:37:35 crc kubenswrapper[5000]: I0105 22:37:35.719426 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb55e4be-34e2-4649-aa6a-24b2019cc9cf/mysql-bootstrap/0.log" Jan 05 22:37:35 crc kubenswrapper[5000]: I0105 22:37:35.938030 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb55e4be-34e2-4649-aa6a-24b2019cc9cf/galera/0.log" Jan 05 22:37:35 crc kubenswrapper[5000]: I0105 22:37:35.954925 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ee3ead96-f298-4707-b5aa-3f310fd71ade/nova-metadata-metadata/0.log" Jan 05 22:37:35 crc kubenswrapper[5000]: I0105 22:37:35.970661 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb55e4be-34e2-4649-aa6a-24b2019cc9cf/mysql-bootstrap/0.log" Jan 05 22:37:36 crc kubenswrapper[5000]: I0105 22:37:36.127375 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_046f24d3-66d8-4a8b-bd20-d1f79426033b/openstackclient/0.log" Jan 05 22:37:36 crc kubenswrapper[5000]: I0105 22:37:36.174599 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-48f9l_2f01d9e3-692b-4648-b57f-3fb13e84379a/openstack-network-exporter/0.log" Jan 05 22:37:36 crc kubenswrapper[5000]: I0105 22:37:36.386715 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cgdx9_4e574607-e42c-4140-b43a-379ba76f4e73/ovsdb-server-init/0.log" Jan 05 22:37:36 crc kubenswrapper[5000]: I0105 22:37:36.529397 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cgdx9_4e574607-e42c-4140-b43a-379ba76f4e73/ovs-vswitchd/0.log" Jan 05 22:37:36 crc kubenswrapper[5000]: I0105 22:37:36.555026 5000 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cgdx9_4e574607-e42c-4140-b43a-379ba76f4e73/ovsdb-server-init/0.log" Jan 05 22:37:36 crc kubenswrapper[5000]: I0105 22:37:36.566971 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cgdx9_4e574607-e42c-4140-b43a-379ba76f4e73/ovsdb-server/0.log" Jan 05 22:37:36 crc kubenswrapper[5000]: I0105 22:37:36.744406 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qtwd6_30f46892-7d0f-4bf9-92c7-2f8fbfdd4ee1/ovn-controller/0.log" Jan 05 22:37:36 crc kubenswrapper[5000]: I0105 22:37:36.839482 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jwldt_d4dde70e-892f-44c4-b19d-d2e6292c2e18/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:36 crc kubenswrapper[5000]: I0105 22:37:36.956854 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_98ae3293-772a-4a0d-8b5e-245e02531e31/openstack-network-exporter/0.log" Jan 05 22:37:36 crc kubenswrapper[5000]: I0105 22:37:36.988946 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_98ae3293-772a-4a0d-8b5e-245e02531e31/ovn-northd/0.log" Jan 05 22:37:37 crc kubenswrapper[5000]: I0105 22:37:37.130986 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3e42459b-9f2f-45c6-8a77-6909cc2689a2/openstack-network-exporter/0.log" Jan 05 22:37:37 crc kubenswrapper[5000]: I0105 22:37:37.206731 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3e42459b-9f2f-45c6-8a77-6909cc2689a2/ovsdbserver-nb/0.log" Jan 05 22:37:37 crc kubenswrapper[5000]: I0105 22:37:37.353215 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f3628fb9-23a7-47e6-853a-e8f31311916f/openstack-network-exporter/0.log" Jan 05 22:37:37 crc kubenswrapper[5000]: I0105 
22:37:37.373376 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f3628fb9-23a7-47e6-853a-e8f31311916f/ovsdbserver-sb/0.log" Jan 05 22:37:37 crc kubenswrapper[5000]: I0105 22:37:37.560632 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-859855f89d-t6p2g_1aa85c76-2f7d-4716-bd4c-4f6f53b75d01/placement-api/0.log" Jan 05 22:37:37 crc kubenswrapper[5000]: I0105 22:37:37.651019 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-859855f89d-t6p2g_1aa85c76-2f7d-4716-bd4c-4f6f53b75d01/placement-log/0.log" Jan 05 22:37:37 crc kubenswrapper[5000]: I0105 22:37:37.679938 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d62d32f0-a7e0-4949-82d3-5e35d8fbf43b/setup-container/0.log" Jan 05 22:37:37 crc kubenswrapper[5000]: I0105 22:37:37.840843 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d62d32f0-a7e0-4949-82d3-5e35d8fbf43b/setup-container/0.log" Jan 05 22:37:37 crc kubenswrapper[5000]: I0105 22:37:37.873490 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d62d32f0-a7e0-4949-82d3-5e35d8fbf43b/rabbitmq/0.log" Jan 05 22:37:37 crc kubenswrapper[5000]: I0105 22:37:37.996625 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ffcf6bf3-6f91-4afe-ba08-9e058c831480/setup-container/0.log" Jan 05 22:37:38 crc kubenswrapper[5000]: I0105 22:37:38.152942 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ffcf6bf3-6f91-4afe-ba08-9e058c831480/setup-container/0.log" Jan 05 22:37:38 crc kubenswrapper[5000]: I0105 22:37:38.209173 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ffcf6bf3-6f91-4afe-ba08-9e058c831480/rabbitmq/0.log" Jan 05 22:37:38 crc kubenswrapper[5000]: I0105 22:37:38.223153 5000 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jcll2_b441855d-0224-48d7-b39e-0930dbd9d1d5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:38 crc kubenswrapper[5000]: I0105 22:37:38.324031 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:37:38 crc kubenswrapper[5000]: E0105 22:37:38.324347 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:37:38 crc kubenswrapper[5000]: I0105 22:37:38.396464 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8l276_7d8b6f53-b39a-4cd8-9587-92cd0f427528/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:38 crc kubenswrapper[5000]: I0105 22:37:38.531211 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5bqvh_61ec2645-0703-42ad-96da-136ceb8b9cda/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:38 crc kubenswrapper[5000]: I0105 22:37:38.618300 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-t76t9_500728b5-6ea6-4696-b63d-36d1a1c64cce/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:38 crc kubenswrapper[5000]: I0105 22:37:38.777427 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-n2m7c_c816069b-4834-4cf8-ada8-c7bf3d339ba2/ssh-known-hosts-edpm-deployment/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 
22:37:39.010657 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5759bb69bf-chpv9_b3694130-425f-4455-9275-0899d204bc66/proxy-server/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.052743 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-nkpzh_bcee38b5-1aa2-4d3f-8545-dfc618226422/swift-ring-rebalance/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.105781 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5759bb69bf-chpv9_b3694130-425f-4455-9275-0899d204bc66/proxy-httpd/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.204946 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/account-auditor/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.242615 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/account-reaper/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.381784 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/account-replicator/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.421874 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/account-server/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.435930 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/container-auditor/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.582453 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/container-replicator/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.605071 5000 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/container-server/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.653709 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/container-updater/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.683055 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/object-auditor/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.836319 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/object-replicator/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.846186 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/object-server/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.876472 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/object-expirer/0.log" Jan 05 22:37:39 crc kubenswrapper[5000]: I0105 22:37:39.901730 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/object-updater/0.log" Jan 05 22:37:40 crc kubenswrapper[5000]: I0105 22:37:40.035598 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/swift-recon-cron/0.log" Jan 05 22:37:40 crc kubenswrapper[5000]: I0105 22:37:40.102519 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b20daab-eee2-4a54-9ded-9ad1fe1c3c1f/rsync/0.log" Jan 05 22:37:40 crc kubenswrapper[5000]: I0105 22:37:40.226193 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-bp8wd_9457bd68-0fcd-45ee-9625-4a82d4ad181d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:40 crc kubenswrapper[5000]: I0105 22:37:40.333358 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_afff7bec-07b5-49b0-9b93-49f90b6c0214/tempest-tests-tempest-tests-runner/0.log" Jan 05 22:37:40 crc kubenswrapper[5000]: I0105 22:37:40.452212 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6b25987a-4797-4b1a-be62-fef207e3aadc/test-operator-logs-container/0.log" Jan 05 22:37:40 crc kubenswrapper[5000]: I0105 22:37:40.521809 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-wb2m6_7cff51b1-fa8c-43c0-8563-b83e0b4542cb/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 05 22:37:51 crc kubenswrapper[5000]: I0105 22:37:51.459604 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b7b36978-e904-42dc-b2e9-cfd481f5b6f0/memcached/0.log" Jan 05 22:37:52 crc kubenswrapper[5000]: I0105 22:37:52.323828 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:37:52 crc kubenswrapper[5000]: E0105 22:37:52.324145 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:38:03 crc kubenswrapper[5000]: I0105 22:38:03.948877 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/util/0.log" Jan 05 22:38:04 crc kubenswrapper[5000]: I0105 22:38:04.159424 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/util/0.log" Jan 05 22:38:04 crc kubenswrapper[5000]: I0105 22:38:04.166699 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/pull/0.log" Jan 05 22:38:04 crc kubenswrapper[5000]: I0105 22:38:04.181398 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/pull/0.log" Jan 05 22:38:04 crc kubenswrapper[5000]: I0105 22:38:04.351245 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/pull/0.log" Jan 05 22:38:04 crc kubenswrapper[5000]: I0105 22:38:04.358655 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/util/0.log" Jan 05 22:38:04 crc kubenswrapper[5000]: I0105 22:38:04.377624 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_21b6e2e4ef20a3c7234f372b3bad15b515b24cac60e0e558bd1b9e85fc87sw6_2263ae7c-d1ad-4e51-ac66-a254cf554cd3/extract/0.log" Jan 05 22:38:04 crc kubenswrapper[5000]: I0105 22:38:04.568624 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-mcqdp_3b7bc759-79ec-4375-848d-a4900428e360/manager/0.log" Jan 05 22:38:04 crc 
kubenswrapper[5000]: I0105 22:38:04.620217 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-p6wws_97262ac6-99c3-47d4-a2a4-401e945a53c7/manager/0.log" Jan 05 22:38:04 crc kubenswrapper[5000]: I0105 22:38:04.737020 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-jsbjc_2d94d179-bc23-416d-b4c7-6925b43d7131/manager/0.log" Jan 05 22:38:04 crc kubenswrapper[5000]: I0105 22:38:04.870643 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-2rhpx_a457b96c-32bc-4fbc-80e2-3567e1fdead4/manager/0.log" Jan 05 22:38:04 crc kubenswrapper[5000]: I0105 22:38:04.952924 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-2q8d7_a5f4bfce-86d7-4e99-984f-2a834fda3018/manager/0.log" Jan 05 22:38:05 crc kubenswrapper[5000]: I0105 22:38:05.030925 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-rcwpw_c246b6eb-3f29-404c-8b9c-f96bfc9ac87d/manager/0.log" Jan 05 22:38:05 crc kubenswrapper[5000]: I0105 22:38:05.226205 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-m8qfg_7750c973-b8d1-47f3-90ed-1034a7e6c33c/manager/0.log" Jan 05 22:38:05 crc kubenswrapper[5000]: I0105 22:38:05.355491 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-n9mxh_87ca26ac-b882-4e9a-8f90-27461a61453e/manager/0.log" Jan 05 22:38:05 crc kubenswrapper[5000]: I0105 22:38:05.441900 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-h7j5w_fe4fd66d-9294-437e-b21e-c66cf323999e/manager/0.log" Jan 05 
22:38:05 crc kubenswrapper[5000]: I0105 22:38:05.548626 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-zg96g_450de243-6d71-4f61-836a-47028669d2b7/manager/0.log" Jan 05 22:38:05 crc kubenswrapper[5000]: I0105 22:38:05.656953 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-9smz4_bd739e2a-b4fb-43cb-bbc5-50b44e18bcfd/manager/0.log" Jan 05 22:38:05 crc kubenswrapper[5000]: I0105 22:38:05.776448 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-ghw2z_d60727e4-58b9-43ed-ae99-0c44cab79dc9/manager/0.log" Jan 05 22:38:05 crc kubenswrapper[5000]: I0105 22:38:05.917118 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-xrl9g_e376cad9-0c9e-423a-a1fb-b33246417cbb/manager/0.log" Jan 05 22:38:05 crc kubenswrapper[5000]: I0105 22:38:05.951771 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-h5tz2_bb2dd57d-6d64-4048-b69b-749250d948b9/manager/0.log" Jan 05 22:38:06 crc kubenswrapper[5000]: I0105 22:38:06.101165 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7dpm4h_f4d8f065-ce54-4bc9-9caf-e6a131e73a35/manager/0.log" Jan 05 22:38:06 crc kubenswrapper[5000]: I0105 22:38:06.397091 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-jm56r_3dfe8a9b-7998-4246-b195-b9a2ab968946/registry-server/0.log" Jan 05 22:38:06 crc kubenswrapper[5000]: I0105 22:38:06.491508 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-59bf84b846-bghfn_e31709ea-50f3-4b79-9851-e6c21b82aa58/operator/0.log" Jan 05 22:38:06 crc kubenswrapper[5000]: I0105 22:38:06.786427 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-lh5t8_42922f7b-4e7e-4ef1-b465-936097b98929/manager/0.log" Jan 05 22:38:06 crc kubenswrapper[5000]: I0105 22:38:06.899311 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-v6nfh_7dab6b1b-c641-4e22-a689-a1dc62da7733/manager/0.log" Jan 05 22:38:07 crc kubenswrapper[5000]: I0105 22:38:07.041784 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fv4wf_6e1e7b73-65c0-40db-964f-93e2d81d1004/operator/0.log" Jan 05 22:38:07 crc kubenswrapper[5000]: I0105 22:38:07.216552 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-pk7nh_2a8023f1-b9cf-4fa2-b421-b053941d4c42/manager/0.log" Jan 05 22:38:07 crc kubenswrapper[5000]: I0105 22:38:07.302831 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cd5f6db77-hgptq_fb31c907-60af-4a8c-a49f-977f28a18e20/manager/0.log" Jan 05 22:38:07 crc kubenswrapper[5000]: I0105 22:38:07.324018 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:38:07 crc kubenswrapper[5000]: E0105 22:38:07.324302 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:38:07 crc kubenswrapper[5000]: I0105 22:38:07.467393 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-9cd8n_1236464f-4580-4f31-ab8b-a22d559aa8c3/manager/0.log" Jan 05 22:38:07 crc kubenswrapper[5000]: I0105 22:38:07.551945 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-dzjnd_95d67b6f-d50a-49c6-b866-9926f4b9e495/manager/0.log" Jan 05 22:38:07 crc kubenswrapper[5000]: I0105 22:38:07.622758 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-whzx7_5830ae86-6c11-4567-8f4a-28d4e3251c07/manager/0.log" Jan 05 22:38:18 crc kubenswrapper[5000]: I0105 22:38:18.323680 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:38:18 crc kubenswrapper[5000]: E0105 22:38:18.324486 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:38:24 crc kubenswrapper[5000]: I0105 22:38:24.737276 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2cjfv_81ff9dcc-be92-40cf-b45b-ba49fc78918a/control-plane-machine-set-operator/0.log" Jan 05 22:38:24 crc kubenswrapper[5000]: I0105 22:38:24.852154 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cfzn2_096d4722-b423-4819-a8fb-61556963fd3a/kube-rbac-proxy/0.log" Jan 05 22:38:24 crc kubenswrapper[5000]: I0105 22:38:24.914549 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cfzn2_096d4722-b423-4819-a8fb-61556963fd3a/machine-api-operator/0.log" Jan 05 22:38:33 crc kubenswrapper[5000]: I0105 22:38:33.325447 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:38:33 crc kubenswrapper[5000]: E0105 22:38:33.326957 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:38:37 crc kubenswrapper[5000]: I0105 22:38:37.912336 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-d7hcb_0edf1980-d816-4cf8-ac70-c0a92cb8ca7c/cert-manager-controller/0.log" Jan 05 22:38:38 crc kubenswrapper[5000]: I0105 22:38:38.239480 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-mvh6l_e567f6b1-10dc-4a2a-9ebb-2837b486af32/cert-manager-cainjector/0.log" Jan 05 22:38:38 crc kubenswrapper[5000]: I0105 22:38:38.398666 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-pgdwz_61ca53f0-4a50-4090-846e-cfe229006c13/cert-manager-webhook/0.log" Jan 05 22:38:47 crc kubenswrapper[5000]: I0105 22:38:47.324458 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:38:47 crc 
kubenswrapper[5000]: E0105 22:38:47.325353 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:38:49 crc kubenswrapper[5000]: I0105 22:38:49.809077 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-mwb84_51c01670-2f5f-45e5-b50c-10034384df7b/nmstate-console-plugin/0.log" Jan 05 22:38:50 crc kubenswrapper[5000]: I0105 22:38:50.129405 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sgg82_061efcae-2cef-41f7-bae8-69730db02cf2/nmstate-handler/0.log" Jan 05 22:38:50 crc kubenswrapper[5000]: I0105 22:38:50.167675 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-9zbc5_e754b051-d59b-4f7b-9bd4-8ac140b5a8a3/kube-rbac-proxy/0.log" Jan 05 22:38:50 crc kubenswrapper[5000]: I0105 22:38:50.184809 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-9zbc5_e754b051-d59b-4f7b-9bd4-8ac140b5a8a3/nmstate-metrics/0.log" Jan 05 22:38:50 crc kubenswrapper[5000]: I0105 22:38:50.437578 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-r56zg_e2491ff3-21bb-4019-b297-1e6b0bdd9707/nmstate-operator/0.log" Jan 05 22:38:50 crc kubenswrapper[5000]: I0105 22:38:50.458571 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-hf8ck_251a5c5e-01cb-474f-9271-1d8ec430e9ac/nmstate-webhook/0.log" Jan 05 22:39:00 crc kubenswrapper[5000]: I0105 22:39:00.325016 5000 scope.go:117] 
"RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:39:00 crc kubenswrapper[5000]: E0105 22:39:00.325964 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:39:04 crc kubenswrapper[5000]: I0105 22:39:04.949634 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-fvbvp_768d8155-0383-40d9-993e-fe7a60a3b020/kube-rbac-proxy/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.028696 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-fvbvp_768d8155-0383-40d9-993e-fe7a60a3b020/controller/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.132424 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-ql6m5_468e8ed3-60c2-4cf4-8c3e-be1d5e91674f/frr-k8s-webhook-server/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.231656 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-frr-files/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.379883 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-reloader/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.384000 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-frr-files/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.400257 5000 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-metrics/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.420476 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-reloader/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.615443 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-frr-files/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.616777 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-reloader/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.636212 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-metrics/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.644166 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-metrics/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.818436 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-frr-files/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.878687 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-reloader/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.878754 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/cp-metrics/0.log" Jan 05 22:39:05 crc kubenswrapper[5000]: I0105 22:39:05.881748 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/controller/0.log" Jan 05 22:39:06 crc kubenswrapper[5000]: I0105 22:39:06.066040 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/frr-metrics/0.log" Jan 05 22:39:06 crc kubenswrapper[5000]: I0105 22:39:06.091981 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/kube-rbac-proxy/0.log" Jan 05 22:39:06 crc kubenswrapper[5000]: I0105 22:39:06.099416 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/kube-rbac-proxy-frr/0.log" Jan 05 22:39:06 crc kubenswrapper[5000]: I0105 22:39:06.327158 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/reloader/0.log" Jan 05 22:39:06 crc kubenswrapper[5000]: I0105 22:39:06.348255 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5786b66bf7-nhsgw_61add664-ba89-4308-a9bc-fedeb78aa01d/manager/0.log" Jan 05 22:39:06 crc kubenswrapper[5000]: I0105 22:39:06.533456 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-749c9dfbcd-wjtpt_edb7d669-1a88-412b-8629-ef80169998dd/webhook-server/0.log" Jan 05 22:39:06 crc kubenswrapper[5000]: I0105 22:39:06.737022 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7cjvw_b49f39fb-cf2e-4bae-aefd-e476b4155444/kube-rbac-proxy/0.log" Jan 05 22:39:07 crc kubenswrapper[5000]: I0105 22:39:07.223550 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7cjvw_b49f39fb-cf2e-4bae-aefd-e476b4155444/speaker/0.log" Jan 05 22:39:07 crc kubenswrapper[5000]: I0105 22:39:07.520372 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xdjxg_2c49dab8-fe42-472c-96d4-5bb565f9042b/frr/0.log" Jan 05 22:39:12 crc kubenswrapper[5000]: I0105 22:39:12.324339 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:39:12 crc kubenswrapper[5000]: E0105 22:39:12.325141 5000 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xpvqx_openshift-machine-config-operator(7e7d3ef9-ed44-43ac-826a-1b5606c8487b)\"" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" Jan 05 22:39:18 crc kubenswrapper[5000]: I0105 22:39:18.490699 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/util/0.log" Jan 05 22:39:18 crc kubenswrapper[5000]: I0105 22:39:18.700926 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/util/0.log" Jan 05 22:39:18 crc kubenswrapper[5000]: I0105 22:39:18.712043 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/pull/0.log" Jan 05 22:39:18 crc kubenswrapper[5000]: I0105 22:39:18.803571 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/pull/0.log" Jan 05 22:39:18 crc kubenswrapper[5000]: I0105 22:39:18.912981 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/util/0.log" Jan 05 22:39:18 crc kubenswrapper[5000]: I0105 22:39:18.933583 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/extract/0.log" Jan 05 22:39:18 crc kubenswrapper[5000]: I0105 22:39:18.961038 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4rhlzl_ec09c357-2496-458f-8c66-3acb727c58bd/pull/0.log" Jan 05 22:39:19 crc kubenswrapper[5000]: I0105 22:39:19.089949 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/util/0.log" Jan 05 22:39:19 crc kubenswrapper[5000]: I0105 22:39:19.228449 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/pull/0.log" Jan 05 22:39:19 crc kubenswrapper[5000]: I0105 22:39:19.252051 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/pull/0.log" Jan 05 22:39:19 crc kubenswrapper[5000]: I0105 22:39:19.272072 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/util/0.log" Jan 05 22:39:19 crc kubenswrapper[5000]: I0105 22:39:19.476583 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/util/0.log" Jan 05 
22:39:19 crc kubenswrapper[5000]: I0105 22:39:19.482085 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/extract/0.log" Jan 05 22:39:19 crc kubenswrapper[5000]: I0105 22:39:19.493223 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8rwx5j_dc49396f-e546-49a1-afc3-79b06accebaa/pull/0.log" Jan 05 22:39:19 crc kubenswrapper[5000]: I0105 22:39:19.621937 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/extract-utilities/0.log" Jan 05 22:39:19 crc kubenswrapper[5000]: I0105 22:39:19.798053 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/extract-utilities/0.log" Jan 05 22:39:19 crc kubenswrapper[5000]: I0105 22:39:19.799723 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/extract-content/0.log" Jan 05 22:39:19 crc kubenswrapper[5000]: I0105 22:39:19.803967 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/extract-content/0.log" Jan 05 22:39:19 crc kubenswrapper[5000]: I0105 22:39:19.941072 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/extract-content/0.log" Jan 05 22:39:19 crc kubenswrapper[5000]: I0105 22:39:19.986880 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/extract-utilities/0.log" Jan 05 22:39:20 crc kubenswrapper[5000]: I0105 22:39:20.226411 
5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/extract-utilities/0.log" Jan 05 22:39:20 crc kubenswrapper[5000]: I0105 22:39:20.344995 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/extract-content/0.log" Jan 05 22:39:20 crc kubenswrapper[5000]: I0105 22:39:20.430199 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/extract-utilities/0.log" Jan 05 22:39:20 crc kubenswrapper[5000]: I0105 22:39:20.464152 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/extract-content/0.log" Jan 05 22:39:20 crc kubenswrapper[5000]: I0105 22:39:20.510652 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-527mn_82b26bf1-ce94-4d00-b00d-fda0c33a2dfe/registry-server/0.log" Jan 05 22:39:20 crc kubenswrapper[5000]: I0105 22:39:20.618720 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/extract-utilities/0.log" Jan 05 22:39:20 crc kubenswrapper[5000]: I0105 22:39:20.625525 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/extract-content/0.log" Jan 05 22:39:20 crc kubenswrapper[5000]: I0105 22:39:20.853997 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-d8trn_28f7248c-0908-4c50-8c47-14d96f5c8665/marketplace-operator/0.log" Jan 05 22:39:20 crc kubenswrapper[5000]: I0105 22:39:20.952319 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/extract-utilities/0.log" Jan 05 22:39:21 crc kubenswrapper[5000]: I0105 22:39:21.094566 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/extract-utilities/0.log" Jan 05 22:39:21 crc kubenswrapper[5000]: I0105 22:39:21.133967 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/extract-content/0.log" Jan 05 22:39:21 crc kubenswrapper[5000]: I0105 22:39:21.200259 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-54c86_8ac8e069-4823-418e-be56-ec272b979420/registry-server/0.log" Jan 05 22:39:21 crc kubenswrapper[5000]: I0105 22:39:21.225953 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/extract-content/0.log" Jan 05 22:39:21 crc kubenswrapper[5000]: I0105 22:39:21.342958 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/extract-utilities/0.log" Jan 05 22:39:21 crc kubenswrapper[5000]: I0105 22:39:21.346150 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/extract-content/0.log" Jan 05 22:39:21 crc kubenswrapper[5000]: I0105 22:39:21.487225 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c5kv5_928d6f47-cdd2-4d32-a807-f94d9cbc05cb/registry-server/0.log" Jan 05 22:39:21 crc kubenswrapper[5000]: I0105 22:39:21.527530 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/extract-utilities/0.log" Jan 05 22:39:21 crc kubenswrapper[5000]: I0105 22:39:21.726964 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/extract-content/0.log" Jan 05 22:39:21 crc kubenswrapper[5000]: I0105 22:39:21.741061 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/extract-utilities/0.log" Jan 05 22:39:21 crc kubenswrapper[5000]: I0105 22:39:21.756081 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/extract-content/0.log" Jan 05 22:39:21 crc kubenswrapper[5000]: I0105 22:39:21.903874 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/extract-utilities/0.log" Jan 05 22:39:21 crc kubenswrapper[5000]: I0105 22:39:21.938025 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/extract-content/0.log" Jan 05 22:39:22 crc kubenswrapper[5000]: I0105 22:39:22.476244 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tnrhc_05627cab-34e2-43e0-abd1-c730dfde0fb3/registry-server/0.log" Jan 05 22:39:27 crc kubenswrapper[5000]: I0105 22:39:27.323525 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:39:28 crc kubenswrapper[5000]: I0105 22:39:28.355788 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" 
event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"bbcc10d137c200154c90ae1fd2fd27257d68d3b388d631fd33fb92153030072f"} Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.413620 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-chhrm"] Jan 05 22:39:34 crc kubenswrapper[5000]: E0105 22:39:34.419397 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d15f90-3765-4845-aec6-31138c05baab" containerName="container-00" Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.419420 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d15f90-3765-4845-aec6-31138c05baab" containerName="container-00" Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.419825 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d15f90-3765-4845-aec6-31138c05baab" containerName="container-00" Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.425761 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chhrm"] Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.425876 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.518357 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebfcb3f-11ab-457c-90a8-973574cf9620-catalog-content\") pod \"community-operators-chhrm\" (UID: \"6ebfcb3f-11ab-457c-90a8-973574cf9620\") " pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.518483 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r82nh\" (UniqueName: \"kubernetes.io/projected/6ebfcb3f-11ab-457c-90a8-973574cf9620-kube-api-access-r82nh\") pod \"community-operators-chhrm\" (UID: \"6ebfcb3f-11ab-457c-90a8-973574cf9620\") " pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.518523 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebfcb3f-11ab-457c-90a8-973574cf9620-utilities\") pod \"community-operators-chhrm\" (UID: \"6ebfcb3f-11ab-457c-90a8-973574cf9620\") " pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.619851 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r82nh\" (UniqueName: \"kubernetes.io/projected/6ebfcb3f-11ab-457c-90a8-973574cf9620-kube-api-access-r82nh\") pod \"community-operators-chhrm\" (UID: \"6ebfcb3f-11ab-457c-90a8-973574cf9620\") " pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.619945 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebfcb3f-11ab-457c-90a8-973574cf9620-utilities\") pod 
\"community-operators-chhrm\" (UID: \"6ebfcb3f-11ab-457c-90a8-973574cf9620\") " pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.620090 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebfcb3f-11ab-457c-90a8-973574cf9620-catalog-content\") pod \"community-operators-chhrm\" (UID: \"6ebfcb3f-11ab-457c-90a8-973574cf9620\") " pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.620748 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebfcb3f-11ab-457c-90a8-973574cf9620-catalog-content\") pod \"community-operators-chhrm\" (UID: \"6ebfcb3f-11ab-457c-90a8-973574cf9620\") " pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.620870 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebfcb3f-11ab-457c-90a8-973574cf9620-utilities\") pod \"community-operators-chhrm\" (UID: \"6ebfcb3f-11ab-457c-90a8-973574cf9620\") " pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.652665 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r82nh\" (UniqueName: \"kubernetes.io/projected/6ebfcb3f-11ab-457c-90a8-973574cf9620-kube-api-access-r82nh\") pod \"community-operators-chhrm\" (UID: \"6ebfcb3f-11ab-457c-90a8-973574cf9620\") " pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:34 crc kubenswrapper[5000]: I0105 22:39:34.743636 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:35 crc kubenswrapper[5000]: I0105 22:39:35.342676 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chhrm"] Jan 05 22:39:35 crc kubenswrapper[5000]: I0105 22:39:35.435121 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chhrm" event={"ID":"6ebfcb3f-11ab-457c-90a8-973574cf9620","Type":"ContainerStarted","Data":"5f2af954e584f7d036d18aaf22e20c8eba94f7a6d81effc6ac94e9f6b09148fb"} Jan 05 22:39:36 crc kubenswrapper[5000]: I0105 22:39:36.444496 5000 generic.go:334] "Generic (PLEG): container finished" podID="6ebfcb3f-11ab-457c-90a8-973574cf9620" containerID="ae2dd0e69413ec9359a22a696d9c71abe9aaaed2428f263fdcf9fba86d2aba3c" exitCode=0 Jan 05 22:39:36 crc kubenswrapper[5000]: I0105 22:39:36.444596 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chhrm" event={"ID":"6ebfcb3f-11ab-457c-90a8-973574cf9620","Type":"ContainerDied","Data":"ae2dd0e69413ec9359a22a696d9c71abe9aaaed2428f263fdcf9fba86d2aba3c"} Jan 05 22:39:37 crc kubenswrapper[5000]: I0105 22:39:37.454204 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chhrm" event={"ID":"6ebfcb3f-11ab-457c-90a8-973574cf9620","Type":"ContainerStarted","Data":"9fe90c6dc0765260e105638126b536813456d65182ae4f99401a3e5f0fcfbf50"} Jan 05 22:39:38 crc kubenswrapper[5000]: I0105 22:39:38.464858 5000 generic.go:334] "Generic (PLEG): container finished" podID="6ebfcb3f-11ab-457c-90a8-973574cf9620" containerID="9fe90c6dc0765260e105638126b536813456d65182ae4f99401a3e5f0fcfbf50" exitCode=0 Jan 05 22:39:38 crc kubenswrapper[5000]: I0105 22:39:38.464921 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chhrm" 
event={"ID":"6ebfcb3f-11ab-457c-90a8-973574cf9620","Type":"ContainerDied","Data":"9fe90c6dc0765260e105638126b536813456d65182ae4f99401a3e5f0fcfbf50"} Jan 05 22:39:40 crc kubenswrapper[5000]: I0105 22:39:40.487638 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chhrm" event={"ID":"6ebfcb3f-11ab-457c-90a8-973574cf9620","Type":"ContainerStarted","Data":"89c990cdcc8fe264f1544ed653bf30f1e726f854c82d3daed139859066c4a630"} Jan 05 22:39:44 crc kubenswrapper[5000]: I0105 22:39:44.743734 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:44 crc kubenswrapper[5000]: I0105 22:39:44.744299 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:44 crc kubenswrapper[5000]: I0105 22:39:44.806986 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:44 crc kubenswrapper[5000]: I0105 22:39:44.849660 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-chhrm" podStartSLOduration=7.982948865 podStartE2EDuration="10.849628326s" podCreationTimestamp="2026-01-05 22:39:34 +0000 UTC" firstStartedPulling="2026-01-05 22:39:36.448158444 +0000 UTC m=+3931.404360913" lastFinishedPulling="2026-01-05 22:39:39.314837905 +0000 UTC m=+3934.271040374" observedRunningTime="2026-01-05 22:39:40.520785681 +0000 UTC m=+3935.476988150" watchObservedRunningTime="2026-01-05 22:39:44.849628326 +0000 UTC m=+3939.805830795" Jan 05 22:39:45 crc kubenswrapper[5000]: I0105 22:39:45.695074 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:45 crc kubenswrapper[5000]: I0105 22:39:45.780231 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-chhrm"] Jan 05 22:39:47 crc kubenswrapper[5000]: I0105 22:39:47.548681 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-chhrm" podUID="6ebfcb3f-11ab-457c-90a8-973574cf9620" containerName="registry-server" containerID="cri-o://89c990cdcc8fe264f1544ed653bf30f1e726f854c82d3daed139859066c4a630" gracePeriod=2 Jan 05 22:39:48 crc kubenswrapper[5000]: I0105 22:39:48.578147 5000 generic.go:334] "Generic (PLEG): container finished" podID="6ebfcb3f-11ab-457c-90a8-973574cf9620" containerID="89c990cdcc8fe264f1544ed653bf30f1e726f854c82d3daed139859066c4a630" exitCode=0 Jan 05 22:39:48 crc kubenswrapper[5000]: I0105 22:39:48.578236 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chhrm" event={"ID":"6ebfcb3f-11ab-457c-90a8-973574cf9620","Type":"ContainerDied","Data":"89c990cdcc8fe264f1544ed653bf30f1e726f854c82d3daed139859066c4a630"} Jan 05 22:39:48 crc kubenswrapper[5000]: I0105 22:39:48.917639 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.006261 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebfcb3f-11ab-457c-90a8-973574cf9620-utilities\") pod \"6ebfcb3f-11ab-457c-90a8-973574cf9620\" (UID: \"6ebfcb3f-11ab-457c-90a8-973574cf9620\") " Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.006604 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebfcb3f-11ab-457c-90a8-973574cf9620-catalog-content\") pod \"6ebfcb3f-11ab-457c-90a8-973574cf9620\" (UID: \"6ebfcb3f-11ab-457c-90a8-973574cf9620\") " Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.006677 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r82nh\" (UniqueName: \"kubernetes.io/projected/6ebfcb3f-11ab-457c-90a8-973574cf9620-kube-api-access-r82nh\") pod \"6ebfcb3f-11ab-457c-90a8-973574cf9620\" (UID: \"6ebfcb3f-11ab-457c-90a8-973574cf9620\") " Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.008315 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ebfcb3f-11ab-457c-90a8-973574cf9620-utilities" (OuterVolumeSpecName: "utilities") pod "6ebfcb3f-11ab-457c-90a8-973574cf9620" (UID: "6ebfcb3f-11ab-457c-90a8-973574cf9620"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.014094 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ebfcb3f-11ab-457c-90a8-973574cf9620-kube-api-access-r82nh" (OuterVolumeSpecName: "kube-api-access-r82nh") pod "6ebfcb3f-11ab-457c-90a8-973574cf9620" (UID: "6ebfcb3f-11ab-457c-90a8-973574cf9620"). InnerVolumeSpecName "kube-api-access-r82nh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.053407 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ebfcb3f-11ab-457c-90a8-973574cf9620-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ebfcb3f-11ab-457c-90a8-973574cf9620" (UID: "6ebfcb3f-11ab-457c-90a8-973574cf9620"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.109119 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebfcb3f-11ab-457c-90a8-973574cf9620-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.109163 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r82nh\" (UniqueName: \"kubernetes.io/projected/6ebfcb3f-11ab-457c-90a8-973574cf9620-kube-api-access-r82nh\") on node \"crc\" DevicePath \"\"" Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.109176 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebfcb3f-11ab-457c-90a8-973574cf9620-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.588253 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chhrm" event={"ID":"6ebfcb3f-11ab-457c-90a8-973574cf9620","Type":"ContainerDied","Data":"5f2af954e584f7d036d18aaf22e20c8eba94f7a6d81effc6ac94e9f6b09148fb"} Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.588539 5000 scope.go:117] "RemoveContainer" containerID="89c990cdcc8fe264f1544ed653bf30f1e726f854c82d3daed139859066c4a630" Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.588397 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-chhrm" Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.627736 5000 scope.go:117] "RemoveContainer" containerID="9fe90c6dc0765260e105638126b536813456d65182ae4f99401a3e5f0fcfbf50" Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.633433 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-chhrm"] Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.648999 5000 scope.go:117] "RemoveContainer" containerID="ae2dd0e69413ec9359a22a696d9c71abe9aaaed2428f263fdcf9fba86d2aba3c" Jan 05 22:39:49 crc kubenswrapper[5000]: I0105 22:39:49.662653 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-chhrm"] Jan 05 22:39:51 crc kubenswrapper[5000]: I0105 22:39:51.337511 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ebfcb3f-11ab-457c-90a8-973574cf9620" path="/var/lib/kubelet/pods/6ebfcb3f-11ab-457c-90a8-973574cf9620/volumes" Jan 05 22:41:05 crc kubenswrapper[5000]: I0105 22:41:05.192585 5000 generic.go:334] "Generic (PLEG): container finished" podID="074fa3b1-2fbf-4625-b2dd-418fc809bc81" containerID="0b85aedafb603ab219985b3c301f3a439e0c9366d28c4d39c5db592fff024b98" exitCode=0 Jan 05 22:41:05 crc kubenswrapper[5000]: I0105 22:41:05.192683 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qd74t/must-gather-wpvps" event={"ID":"074fa3b1-2fbf-4625-b2dd-418fc809bc81","Type":"ContainerDied","Data":"0b85aedafb603ab219985b3c301f3a439e0c9366d28c4d39c5db592fff024b98"} Jan 05 22:41:05 crc kubenswrapper[5000]: I0105 22:41:05.193782 5000 scope.go:117] "RemoveContainer" containerID="0b85aedafb603ab219985b3c301f3a439e0c9366d28c4d39c5db592fff024b98" Jan 05 22:41:06 crc kubenswrapper[5000]: I0105 22:41:06.220188 5000 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-qd74t_must-gather-wpvps_074fa3b1-2fbf-4625-b2dd-418fc809bc81/gather/0.log" Jan 05 22:41:16 crc kubenswrapper[5000]: I0105 22:41:16.827561 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qd74t/must-gather-wpvps"] Jan 05 22:41:16 crc kubenswrapper[5000]: I0105 22:41:16.828277 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qd74t/must-gather-wpvps" podUID="074fa3b1-2fbf-4625-b2dd-418fc809bc81" containerName="copy" containerID="cri-o://980c860cc74bbe871e5a78c019147d8def9ff18accabe2e265a995b0491a3753" gracePeriod=2 Jan 05 22:41:16 crc kubenswrapper[5000]: I0105 22:41:16.838165 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qd74t/must-gather-wpvps"] Jan 05 22:41:17 crc kubenswrapper[5000]: I0105 22:41:17.294572 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qd74t_must-gather-wpvps_074fa3b1-2fbf-4625-b2dd-418fc809bc81/copy/0.log" Jan 05 22:41:17 crc kubenswrapper[5000]: I0105 22:41:17.295193 5000 generic.go:334] "Generic (PLEG): container finished" podID="074fa3b1-2fbf-4625-b2dd-418fc809bc81" containerID="980c860cc74bbe871e5a78c019147d8def9ff18accabe2e265a995b0491a3753" exitCode=143 Jan 05 22:41:17 crc kubenswrapper[5000]: I0105 22:41:17.399972 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qd74t_must-gather-wpvps_074fa3b1-2fbf-4625-b2dd-418fc809bc81/copy/0.log" Jan 05 22:41:17 crc kubenswrapper[5000]: I0105 22:41:17.400335 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qd74t/must-gather-wpvps" Jan 05 22:41:17 crc kubenswrapper[5000]: I0105 22:41:17.571244 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vrgz\" (UniqueName: \"kubernetes.io/projected/074fa3b1-2fbf-4625-b2dd-418fc809bc81-kube-api-access-6vrgz\") pod \"074fa3b1-2fbf-4625-b2dd-418fc809bc81\" (UID: \"074fa3b1-2fbf-4625-b2dd-418fc809bc81\") " Jan 05 22:41:17 crc kubenswrapper[5000]: I0105 22:41:17.571343 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/074fa3b1-2fbf-4625-b2dd-418fc809bc81-must-gather-output\") pod \"074fa3b1-2fbf-4625-b2dd-418fc809bc81\" (UID: \"074fa3b1-2fbf-4625-b2dd-418fc809bc81\") " Jan 05 22:41:17 crc kubenswrapper[5000]: I0105 22:41:17.584161 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/074fa3b1-2fbf-4625-b2dd-418fc809bc81-kube-api-access-6vrgz" (OuterVolumeSpecName: "kube-api-access-6vrgz") pod "074fa3b1-2fbf-4625-b2dd-418fc809bc81" (UID: "074fa3b1-2fbf-4625-b2dd-418fc809bc81"). InnerVolumeSpecName "kube-api-access-6vrgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:41:17 crc kubenswrapper[5000]: I0105 22:41:17.673165 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vrgz\" (UniqueName: \"kubernetes.io/projected/074fa3b1-2fbf-4625-b2dd-418fc809bc81-kube-api-access-6vrgz\") on node \"crc\" DevicePath \"\"" Jan 05 22:41:17 crc kubenswrapper[5000]: I0105 22:41:17.727638 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/074fa3b1-2fbf-4625-b2dd-418fc809bc81-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "074fa3b1-2fbf-4625-b2dd-418fc809bc81" (UID: "074fa3b1-2fbf-4625-b2dd-418fc809bc81"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:41:17 crc kubenswrapper[5000]: I0105 22:41:17.774997 5000 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/074fa3b1-2fbf-4625-b2dd-418fc809bc81-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 05 22:41:18 crc kubenswrapper[5000]: I0105 22:41:18.303239 5000 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qd74t_must-gather-wpvps_074fa3b1-2fbf-4625-b2dd-418fc809bc81/copy/0.log" Jan 05 22:41:18 crc kubenswrapper[5000]: I0105 22:41:18.303581 5000 scope.go:117] "RemoveContainer" containerID="980c860cc74bbe871e5a78c019147d8def9ff18accabe2e265a995b0491a3753" Jan 05 22:41:18 crc kubenswrapper[5000]: I0105 22:41:18.303719 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qd74t/must-gather-wpvps" Jan 05 22:41:18 crc kubenswrapper[5000]: I0105 22:41:18.327933 5000 scope.go:117] "RemoveContainer" containerID="0b85aedafb603ab219985b3c301f3a439e0c9366d28c4d39c5db592fff024b98" Jan 05 22:41:19 crc kubenswrapper[5000]: I0105 22:41:19.335911 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="074fa3b1-2fbf-4625-b2dd-418fc809bc81" path="/var/lib/kubelet/pods/074fa3b1-2fbf-4625-b2dd-418fc809bc81/volumes" Jan 05 22:41:53 crc kubenswrapper[5000]: I0105 22:41:53.099646 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:41:53 crc kubenswrapper[5000]: I0105 22:41:53.100859 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:42:23 crc kubenswrapper[5000]: I0105 22:42:23.098777 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:42:23 crc kubenswrapper[5000]: I0105 22:42:23.099377 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.805499 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jkzvl"] Jan 05 22:42:45 crc kubenswrapper[5000]: E0105 22:42:45.806518 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074fa3b1-2fbf-4625-b2dd-418fc809bc81" containerName="copy" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.806532 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="074fa3b1-2fbf-4625-b2dd-418fc809bc81" containerName="copy" Jan 05 22:42:45 crc kubenswrapper[5000]: E0105 22:42:45.806541 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebfcb3f-11ab-457c-90a8-973574cf9620" containerName="registry-server" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.806547 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebfcb3f-11ab-457c-90a8-973574cf9620" containerName="registry-server" Jan 05 22:42:45 crc kubenswrapper[5000]: E0105 22:42:45.806564 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074fa3b1-2fbf-4625-b2dd-418fc809bc81" 
containerName="gather" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.806570 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="074fa3b1-2fbf-4625-b2dd-418fc809bc81" containerName="gather" Jan 05 22:42:45 crc kubenswrapper[5000]: E0105 22:42:45.806591 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebfcb3f-11ab-457c-90a8-973574cf9620" containerName="extract-content" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.806596 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebfcb3f-11ab-457c-90a8-973574cf9620" containerName="extract-content" Jan 05 22:42:45 crc kubenswrapper[5000]: E0105 22:42:45.806615 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebfcb3f-11ab-457c-90a8-973574cf9620" containerName="extract-utilities" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.806621 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebfcb3f-11ab-457c-90a8-973574cf9620" containerName="extract-utilities" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.806789 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ebfcb3f-11ab-457c-90a8-973574cf9620" containerName="registry-server" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.806814 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="074fa3b1-2fbf-4625-b2dd-418fc809bc81" containerName="copy" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.806826 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="074fa3b1-2fbf-4625-b2dd-418fc809bc81" containerName="gather" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.808317 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.828106 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jkzvl"] Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.885933 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c49f1c55-94f7-4117-af14-dee5d541c950-utilities\") pod \"redhat-operators-jkzvl\" (UID: \"c49f1c55-94f7-4117-af14-dee5d541c950\") " pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.886052 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw6b2\" (UniqueName: \"kubernetes.io/projected/c49f1c55-94f7-4117-af14-dee5d541c950-kube-api-access-gw6b2\") pod \"redhat-operators-jkzvl\" (UID: \"c49f1c55-94f7-4117-af14-dee5d541c950\") " pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.886093 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c49f1c55-94f7-4117-af14-dee5d541c950-catalog-content\") pod \"redhat-operators-jkzvl\" (UID: \"c49f1c55-94f7-4117-af14-dee5d541c950\") " pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.988066 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw6b2\" (UniqueName: \"kubernetes.io/projected/c49f1c55-94f7-4117-af14-dee5d541c950-kube-api-access-gw6b2\") pod \"redhat-operators-jkzvl\" (UID: \"c49f1c55-94f7-4117-af14-dee5d541c950\") " pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.988143 5000 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c49f1c55-94f7-4117-af14-dee5d541c950-catalog-content\") pod \"redhat-operators-jkzvl\" (UID: \"c49f1c55-94f7-4117-af14-dee5d541c950\") " pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.988237 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c49f1c55-94f7-4117-af14-dee5d541c950-utilities\") pod \"redhat-operators-jkzvl\" (UID: \"c49f1c55-94f7-4117-af14-dee5d541c950\") " pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.988675 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c49f1c55-94f7-4117-af14-dee5d541c950-utilities\") pod \"redhat-operators-jkzvl\" (UID: \"c49f1c55-94f7-4117-af14-dee5d541c950\") " pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:42:45 crc kubenswrapper[5000]: I0105 22:42:45.989032 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c49f1c55-94f7-4117-af14-dee5d541c950-catalog-content\") pod \"redhat-operators-jkzvl\" (UID: \"c49f1c55-94f7-4117-af14-dee5d541c950\") " pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:42:46 crc kubenswrapper[5000]: I0105 22:42:46.010516 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw6b2\" (UniqueName: \"kubernetes.io/projected/c49f1c55-94f7-4117-af14-dee5d541c950-kube-api-access-gw6b2\") pod \"redhat-operators-jkzvl\" (UID: \"c49f1c55-94f7-4117-af14-dee5d541c950\") " pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:42:46 crc kubenswrapper[5000]: I0105 22:42:46.127050 5000 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:42:46 crc kubenswrapper[5000]: I0105 22:42:46.583984 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jkzvl"] Jan 05 22:42:46 crc kubenswrapper[5000]: W0105 22:42:46.593097 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc49f1c55_94f7_4117_af14_dee5d541c950.slice/crio-a5613ec7111775a2dcfed4d869d93756a86bc646666a912c76960b0b8f04eaa2 WatchSource:0}: Error finding container a5613ec7111775a2dcfed4d869d93756a86bc646666a912c76960b0b8f04eaa2: Status 404 returned error can't find the container with id a5613ec7111775a2dcfed4d869d93756a86bc646666a912c76960b0b8f04eaa2 Jan 05 22:42:47 crc kubenswrapper[5000]: I0105 22:42:47.292229 5000 generic.go:334] "Generic (PLEG): container finished" podID="c49f1c55-94f7-4117-af14-dee5d541c950" containerID="6f79cc43f7743f28bec63ae980bc184da0ceb2d4f93ea212564529fa8673a73e" exitCode=0 Jan 05 22:42:47 crc kubenswrapper[5000]: I0105 22:42:47.292324 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkzvl" event={"ID":"c49f1c55-94f7-4117-af14-dee5d541c950","Type":"ContainerDied","Data":"6f79cc43f7743f28bec63ae980bc184da0ceb2d4f93ea212564529fa8673a73e"} Jan 05 22:42:47 crc kubenswrapper[5000]: I0105 22:42:47.292512 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkzvl" event={"ID":"c49f1c55-94f7-4117-af14-dee5d541c950","Type":"ContainerStarted","Data":"a5613ec7111775a2dcfed4d869d93756a86bc646666a912c76960b0b8f04eaa2"} Jan 05 22:42:47 crc kubenswrapper[5000]: I0105 22:42:47.296663 5000 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 05 22:42:49 crc kubenswrapper[5000]: I0105 22:42:49.311254 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jkzvl" event={"ID":"c49f1c55-94f7-4117-af14-dee5d541c950","Type":"ContainerStarted","Data":"dc991785145e2a4ad5711cbb7428c9f4365334764d64aaab6900b25c0242b4f5"} Jan 05 22:42:50 crc kubenswrapper[5000]: I0105 22:42:50.341286 5000 generic.go:334] "Generic (PLEG): container finished" podID="c49f1c55-94f7-4117-af14-dee5d541c950" containerID="dc991785145e2a4ad5711cbb7428c9f4365334764d64aaab6900b25c0242b4f5" exitCode=0 Jan 05 22:42:50 crc kubenswrapper[5000]: I0105 22:42:50.341373 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkzvl" event={"ID":"c49f1c55-94f7-4117-af14-dee5d541c950","Type":"ContainerDied","Data":"dc991785145e2a4ad5711cbb7428c9f4365334764d64aaab6900b25c0242b4f5"} Jan 05 22:42:51 crc kubenswrapper[5000]: I0105 22:42:51.361524 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkzvl" event={"ID":"c49f1c55-94f7-4117-af14-dee5d541c950","Type":"ContainerStarted","Data":"f76b55e696089617db98cbaa8b7d49e932f06ce39c9aa3a961b5cffa12c1dd10"} Jan 05 22:42:51 crc kubenswrapper[5000]: I0105 22:42:51.387715 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jkzvl" podStartSLOduration=2.877297932 podStartE2EDuration="6.387697176s" podCreationTimestamp="2026-01-05 22:42:45 +0000 UTC" firstStartedPulling="2026-01-05 22:42:47.296382802 +0000 UTC m=+4122.252585271" lastFinishedPulling="2026-01-05 22:42:50.806782046 +0000 UTC m=+4125.762984515" observedRunningTime="2026-01-05 22:42:51.380479391 +0000 UTC m=+4126.336681860" watchObservedRunningTime="2026-01-05 22:42:51.387697176 +0000 UTC m=+4126.343899645" Jan 05 22:42:53 crc kubenswrapper[5000]: I0105 22:42:53.098452 5000 patch_prober.go:28] interesting pod/machine-config-daemon-xpvqx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 05 22:42:53 crc kubenswrapper[5000]: I0105 22:42:53.099066 5000 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 05 22:42:53 crc kubenswrapper[5000]: I0105 22:42:53.099115 5000 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" Jan 05 22:42:53 crc kubenswrapper[5000]: I0105 22:42:53.099911 5000 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bbcc10d137c200154c90ae1fd2fd27257d68d3b388d631fd33fb92153030072f"} pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 05 22:42:53 crc kubenswrapper[5000]: I0105 22:42:53.100001 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" podUID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerName="machine-config-daemon" containerID="cri-o://bbcc10d137c200154c90ae1fd2fd27257d68d3b388d631fd33fb92153030072f" gracePeriod=600 Jan 05 22:42:53 crc kubenswrapper[5000]: I0105 22:42:53.379930 5000 generic.go:334] "Generic (PLEG): container finished" podID="7e7d3ef9-ed44-43ac-826a-1b5606c8487b" containerID="bbcc10d137c200154c90ae1fd2fd27257d68d3b388d631fd33fb92153030072f" exitCode=0 Jan 05 22:42:53 crc kubenswrapper[5000]: I0105 22:42:53.380079 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" 
event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerDied","Data":"bbcc10d137c200154c90ae1fd2fd27257d68d3b388d631fd33fb92153030072f"} Jan 05 22:42:53 crc kubenswrapper[5000]: I0105 22:42:53.380111 5000 scope.go:117] "RemoveContainer" containerID="23823e81cc534a8921a55a2e27e4ad58d233ebe5613fcd0c0cbaeb69639dbc72" Jan 05 22:42:54 crc kubenswrapper[5000]: I0105 22:42:54.390383 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xpvqx" event={"ID":"7e7d3ef9-ed44-43ac-826a-1b5606c8487b","Type":"ContainerStarted","Data":"a5a096dafc7be363197e823f9623fec230d990b3aaa30a79ee63a89349048614"} Jan 05 22:42:56 crc kubenswrapper[5000]: I0105 22:42:56.127336 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:42:56 crc kubenswrapper[5000]: I0105 22:42:56.127762 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:42:57 crc kubenswrapper[5000]: I0105 22:42:57.172280 5000 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jkzvl" podUID="c49f1c55-94f7-4117-af14-dee5d541c950" containerName="registry-server" probeResult="failure" output=< Jan 05 22:42:57 crc kubenswrapper[5000]: timeout: failed to connect service ":50051" within 1s Jan 05 22:42:57 crc kubenswrapper[5000]: > Jan 05 22:43:06 crc kubenswrapper[5000]: I0105 22:43:06.169923 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:43:06 crc kubenswrapper[5000]: I0105 22:43:06.218274 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:43:06 crc kubenswrapper[5000]: I0105 22:43:06.407466 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-jkzvl"] Jan 05 22:43:07 crc kubenswrapper[5000]: I0105 22:43:07.520282 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jkzvl" podUID="c49f1c55-94f7-4117-af14-dee5d541c950" containerName="registry-server" containerID="cri-o://f76b55e696089617db98cbaa8b7d49e932f06ce39c9aa3a961b5cffa12c1dd10" gracePeriod=2 Jan 05 22:43:07 crc kubenswrapper[5000]: I0105 22:43:07.975260 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.098573 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw6b2\" (UniqueName: \"kubernetes.io/projected/c49f1c55-94f7-4117-af14-dee5d541c950-kube-api-access-gw6b2\") pod \"c49f1c55-94f7-4117-af14-dee5d541c950\" (UID: \"c49f1c55-94f7-4117-af14-dee5d541c950\") " Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.098624 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c49f1c55-94f7-4117-af14-dee5d541c950-utilities\") pod \"c49f1c55-94f7-4117-af14-dee5d541c950\" (UID: \"c49f1c55-94f7-4117-af14-dee5d541c950\") " Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.098787 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c49f1c55-94f7-4117-af14-dee5d541c950-catalog-content\") pod \"c49f1c55-94f7-4117-af14-dee5d541c950\" (UID: \"c49f1c55-94f7-4117-af14-dee5d541c950\") " Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.099740 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c49f1c55-94f7-4117-af14-dee5d541c950-utilities" (OuterVolumeSpecName: "utilities") pod "c49f1c55-94f7-4117-af14-dee5d541c950" (UID: 
"c49f1c55-94f7-4117-af14-dee5d541c950"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.105409 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c49f1c55-94f7-4117-af14-dee5d541c950-kube-api-access-gw6b2" (OuterVolumeSpecName: "kube-api-access-gw6b2") pod "c49f1c55-94f7-4117-af14-dee5d541c950" (UID: "c49f1c55-94f7-4117-af14-dee5d541c950"). InnerVolumeSpecName "kube-api-access-gw6b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.201039 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c49f1c55-94f7-4117-af14-dee5d541c950-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.201075 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw6b2\" (UniqueName: \"kubernetes.io/projected/c49f1c55-94f7-4117-af14-dee5d541c950-kube-api-access-gw6b2\") on node \"crc\" DevicePath \"\"" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.222333 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c49f1c55-94f7-4117-af14-dee5d541c950-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c49f1c55-94f7-4117-af14-dee5d541c950" (UID: "c49f1c55-94f7-4117-af14-dee5d541c950"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.302926 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c49f1c55-94f7-4117-af14-dee5d541c950-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.535167 5000 generic.go:334] "Generic (PLEG): container finished" podID="c49f1c55-94f7-4117-af14-dee5d541c950" containerID="f76b55e696089617db98cbaa8b7d49e932f06ce39c9aa3a961b5cffa12c1dd10" exitCode=0 Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.535221 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkzvl" event={"ID":"c49f1c55-94f7-4117-af14-dee5d541c950","Type":"ContainerDied","Data":"f76b55e696089617db98cbaa8b7d49e932f06ce39c9aa3a961b5cffa12c1dd10"} Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.535257 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jkzvl" event={"ID":"c49f1c55-94f7-4117-af14-dee5d541c950","Type":"ContainerDied","Data":"a5613ec7111775a2dcfed4d869d93756a86bc646666a912c76960b0b8f04eaa2"} Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.535280 5000 scope.go:117] "RemoveContainer" containerID="f76b55e696089617db98cbaa8b7d49e932f06ce39c9aa3a961b5cffa12c1dd10" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.535377 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jkzvl" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.553653 5000 scope.go:117] "RemoveContainer" containerID="dc991785145e2a4ad5711cbb7428c9f4365334764d64aaab6900b25c0242b4f5" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.575479 5000 scope.go:117] "RemoveContainer" containerID="6f79cc43f7743f28bec63ae980bc184da0ceb2d4f93ea212564529fa8673a73e" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.633075 5000 scope.go:117] "RemoveContainer" containerID="f76b55e696089617db98cbaa8b7d49e932f06ce39c9aa3a961b5cffa12c1dd10" Jan 05 22:43:08 crc kubenswrapper[5000]: E0105 22:43:08.633645 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76b55e696089617db98cbaa8b7d49e932f06ce39c9aa3a961b5cffa12c1dd10\": container with ID starting with f76b55e696089617db98cbaa8b7d49e932f06ce39c9aa3a961b5cffa12c1dd10 not found: ID does not exist" containerID="f76b55e696089617db98cbaa8b7d49e932f06ce39c9aa3a961b5cffa12c1dd10" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.633712 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76b55e696089617db98cbaa8b7d49e932f06ce39c9aa3a961b5cffa12c1dd10"} err="failed to get container status \"f76b55e696089617db98cbaa8b7d49e932f06ce39c9aa3a961b5cffa12c1dd10\": rpc error: code = NotFound desc = could not find container \"f76b55e696089617db98cbaa8b7d49e932f06ce39c9aa3a961b5cffa12c1dd10\": container with ID starting with f76b55e696089617db98cbaa8b7d49e932f06ce39c9aa3a961b5cffa12c1dd10 not found: ID does not exist" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.633746 5000 scope.go:117] "RemoveContainer" containerID="dc991785145e2a4ad5711cbb7428c9f4365334764d64aaab6900b25c0242b4f5" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.633880 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-jkzvl"] Jan 05 22:43:08 crc kubenswrapper[5000]: E0105 22:43:08.634360 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc991785145e2a4ad5711cbb7428c9f4365334764d64aaab6900b25c0242b4f5\": container with ID starting with dc991785145e2a4ad5711cbb7428c9f4365334764d64aaab6900b25c0242b4f5 not found: ID does not exist" containerID="dc991785145e2a4ad5711cbb7428c9f4365334764d64aaab6900b25c0242b4f5" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.634389 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc991785145e2a4ad5711cbb7428c9f4365334764d64aaab6900b25c0242b4f5"} err="failed to get container status \"dc991785145e2a4ad5711cbb7428c9f4365334764d64aaab6900b25c0242b4f5\": rpc error: code = NotFound desc = could not find container \"dc991785145e2a4ad5711cbb7428c9f4365334764d64aaab6900b25c0242b4f5\": container with ID starting with dc991785145e2a4ad5711cbb7428c9f4365334764d64aaab6900b25c0242b4f5 not found: ID does not exist" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.634411 5000 scope.go:117] "RemoveContainer" containerID="6f79cc43f7743f28bec63ae980bc184da0ceb2d4f93ea212564529fa8673a73e" Jan 05 22:43:08 crc kubenswrapper[5000]: E0105 22:43:08.634933 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f79cc43f7743f28bec63ae980bc184da0ceb2d4f93ea212564529fa8673a73e\": container with ID starting with 6f79cc43f7743f28bec63ae980bc184da0ceb2d4f93ea212564529fa8673a73e not found: ID does not exist" containerID="6f79cc43f7743f28bec63ae980bc184da0ceb2d4f93ea212564529fa8673a73e" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.634959 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f79cc43f7743f28bec63ae980bc184da0ceb2d4f93ea212564529fa8673a73e"} err="failed 
to get container status \"6f79cc43f7743f28bec63ae980bc184da0ceb2d4f93ea212564529fa8673a73e\": rpc error: code = NotFound desc = could not find container \"6f79cc43f7743f28bec63ae980bc184da0ceb2d4f93ea212564529fa8673a73e\": container with ID starting with 6f79cc43f7743f28bec63ae980bc184da0ceb2d4f93ea212564529fa8673a73e not found: ID does not exist" Jan 05 22:43:08 crc kubenswrapper[5000]: I0105 22:43:08.642734 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jkzvl"] Jan 05 22:43:09 crc kubenswrapper[5000]: I0105 22:43:09.335844 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c49f1c55-94f7-4117-af14-dee5d541c950" path="/var/lib/kubelet/pods/c49f1c55-94f7-4117-af14-dee5d541c950/volumes" Jan 05 22:43:21 crc kubenswrapper[5000]: I0105 22:43:21.525364 5000 scope.go:117] "RemoveContainer" containerID="bc3947be244ade408f2ae7b05571388ca32e919fcfaf6fc8d39c6f502eb98b35" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.354193 5000 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zz877"] Jan 05 22:43:45 crc kubenswrapper[5000]: E0105 22:43:45.355238 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c49f1c55-94f7-4117-af14-dee5d541c950" containerName="extract-content" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.355254 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="c49f1c55-94f7-4117-af14-dee5d541c950" containerName="extract-content" Jan 05 22:43:45 crc kubenswrapper[5000]: E0105 22:43:45.355271 5000 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c49f1c55-94f7-4117-af14-dee5d541c950" containerName="extract-utilities" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.355280 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="c49f1c55-94f7-4117-af14-dee5d541c950" containerName="extract-utilities" Jan 05 22:43:45 crc kubenswrapper[5000]: E0105 22:43:45.355297 5000 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c49f1c55-94f7-4117-af14-dee5d541c950" containerName="registry-server" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.355306 5000 state_mem.go:107] "Deleted CPUSet assignment" podUID="c49f1c55-94f7-4117-af14-dee5d541c950" containerName="registry-server" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.355588 5000 memory_manager.go:354] "RemoveStaleState removing state" podUID="c49f1c55-94f7-4117-af14-dee5d541c950" containerName="registry-server" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.357209 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.376327 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zz877"] Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.381052 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc59b433-23a5-4b50-a70d-e0e468a068d9-catalog-content\") pod \"redhat-marketplace-zz877\" (UID: \"dc59b433-23a5-4b50-a70d-e0e468a068d9\") " pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.381135 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4ksc\" (UniqueName: \"kubernetes.io/projected/dc59b433-23a5-4b50-a70d-e0e468a068d9-kube-api-access-c4ksc\") pod \"redhat-marketplace-zz877\" (UID: \"dc59b433-23a5-4b50-a70d-e0e468a068d9\") " pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.381333 5000 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/dc59b433-23a5-4b50-a70d-e0e468a068d9-utilities\") pod \"redhat-marketplace-zz877\" (UID: \"dc59b433-23a5-4b50-a70d-e0e468a068d9\") " pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.483297 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4ksc\" (UniqueName: \"kubernetes.io/projected/dc59b433-23a5-4b50-a70d-e0e468a068d9-kube-api-access-c4ksc\") pod \"redhat-marketplace-zz877\" (UID: \"dc59b433-23a5-4b50-a70d-e0e468a068d9\") " pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.483462 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc59b433-23a5-4b50-a70d-e0e468a068d9-utilities\") pod \"redhat-marketplace-zz877\" (UID: \"dc59b433-23a5-4b50-a70d-e0e468a068d9\") " pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.484029 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc59b433-23a5-4b50-a70d-e0e468a068d9-utilities\") pod \"redhat-marketplace-zz877\" (UID: \"dc59b433-23a5-4b50-a70d-e0e468a068d9\") " pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.484483 5000 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc59b433-23a5-4b50-a70d-e0e468a068d9-catalog-content\") pod \"redhat-marketplace-zz877\" (UID: \"dc59b433-23a5-4b50-a70d-e0e468a068d9\") " pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.484795 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/dc59b433-23a5-4b50-a70d-e0e468a068d9-catalog-content\") pod \"redhat-marketplace-zz877\" (UID: \"dc59b433-23a5-4b50-a70d-e0e468a068d9\") " pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.505327 5000 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4ksc\" (UniqueName: \"kubernetes.io/projected/dc59b433-23a5-4b50-a70d-e0e468a068d9-kube-api-access-c4ksc\") pod \"redhat-marketplace-zz877\" (UID: \"dc59b433-23a5-4b50-a70d-e0e468a068d9\") " pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:45 crc kubenswrapper[5000]: I0105 22:43:45.680086 5000 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:46 crc kubenswrapper[5000]: I0105 22:43:46.184042 5000 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zz877"] Jan 05 22:43:46 crc kubenswrapper[5000]: W0105 22:43:46.192188 5000 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc59b433_23a5_4b50_a70d_e0e468a068d9.slice/crio-307bab0cb2253ccdf1daa8f4db03c5d1ab7caa2b3df6c357f4a9cd226c445394 WatchSource:0}: Error finding container 307bab0cb2253ccdf1daa8f4db03c5d1ab7caa2b3df6c357f4a9cd226c445394: Status 404 returned error can't find the container with id 307bab0cb2253ccdf1daa8f4db03c5d1ab7caa2b3df6c357f4a9cd226c445394 Jan 05 22:43:46 crc kubenswrapper[5000]: I0105 22:43:46.871141 5000 generic.go:334] "Generic (PLEG): container finished" podID="dc59b433-23a5-4b50-a70d-e0e468a068d9" containerID="54df239691e9f63ca0165e26aaba4a761ae283445a32b97850f3fd834873baa6" exitCode=0 Jan 05 22:43:46 crc kubenswrapper[5000]: I0105 22:43:46.871261 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zz877" 
event={"ID":"dc59b433-23a5-4b50-a70d-e0e468a068d9","Type":"ContainerDied","Data":"54df239691e9f63ca0165e26aaba4a761ae283445a32b97850f3fd834873baa6"} Jan 05 22:43:46 crc kubenswrapper[5000]: I0105 22:43:46.871437 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zz877" event={"ID":"dc59b433-23a5-4b50-a70d-e0e468a068d9","Type":"ContainerStarted","Data":"307bab0cb2253ccdf1daa8f4db03c5d1ab7caa2b3df6c357f4a9cd226c445394"} Jan 05 22:43:47 crc kubenswrapper[5000]: I0105 22:43:47.884489 5000 generic.go:334] "Generic (PLEG): container finished" podID="dc59b433-23a5-4b50-a70d-e0e468a068d9" containerID="939ec79f9f32c1e66ec0f5a2415628758d910c99178295613161ce76b6345482" exitCode=0 Jan 05 22:43:47 crc kubenswrapper[5000]: I0105 22:43:47.884824 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zz877" event={"ID":"dc59b433-23a5-4b50-a70d-e0e468a068d9","Type":"ContainerDied","Data":"939ec79f9f32c1e66ec0f5a2415628758d910c99178295613161ce76b6345482"} Jan 05 22:43:48 crc kubenswrapper[5000]: I0105 22:43:48.897833 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zz877" event={"ID":"dc59b433-23a5-4b50-a70d-e0e468a068d9","Type":"ContainerStarted","Data":"4574fa3309960861e6f5fb134b04373ab0b9ba8ca1ba5f2e548d2ef9d4091632"} Jan 05 22:43:48 crc kubenswrapper[5000]: I0105 22:43:48.923973 5000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zz877" podStartSLOduration=2.407684238 podStartE2EDuration="3.92395507s" podCreationTimestamp="2026-01-05 22:43:45 +0000 UTC" firstStartedPulling="2026-01-05 22:43:46.874338071 +0000 UTC m=+4181.830540530" lastFinishedPulling="2026-01-05 22:43:48.390608903 +0000 UTC m=+4183.346811362" observedRunningTime="2026-01-05 22:43:48.912557336 +0000 UTC m=+4183.868759835" watchObservedRunningTime="2026-01-05 22:43:48.92395507 +0000 UTC 
m=+4183.880157539" Jan 05 22:43:55 crc kubenswrapper[5000]: I0105 22:43:55.680510 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:55 crc kubenswrapper[5000]: I0105 22:43:55.681194 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:55 crc kubenswrapper[5000]: I0105 22:43:55.739076 5000 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:56 crc kubenswrapper[5000]: I0105 22:43:56.006812 5000 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:56 crc kubenswrapper[5000]: I0105 22:43:56.069138 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zz877"] Jan 05 22:43:57 crc kubenswrapper[5000]: I0105 22:43:57.969970 5000 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zz877" podUID="dc59b433-23a5-4b50-a70d-e0e468a068d9" containerName="registry-server" containerID="cri-o://4574fa3309960861e6f5fb134b04373ab0b9ba8ca1ba5f2e548d2ef9d4091632" gracePeriod=2 Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.553388 5000 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.648802 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc59b433-23a5-4b50-a70d-e0e468a068d9-catalog-content\") pod \"dc59b433-23a5-4b50-a70d-e0e468a068d9\" (UID: \"dc59b433-23a5-4b50-a70d-e0e468a068d9\") " Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.649080 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc59b433-23a5-4b50-a70d-e0e468a068d9-utilities\") pod \"dc59b433-23a5-4b50-a70d-e0e468a068d9\" (UID: \"dc59b433-23a5-4b50-a70d-e0e468a068d9\") " Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.649117 5000 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4ksc\" (UniqueName: \"kubernetes.io/projected/dc59b433-23a5-4b50-a70d-e0e468a068d9-kube-api-access-c4ksc\") pod \"dc59b433-23a5-4b50-a70d-e0e468a068d9\" (UID: \"dc59b433-23a5-4b50-a70d-e0e468a068d9\") " Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.649867 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc59b433-23a5-4b50-a70d-e0e468a068d9-utilities" (OuterVolumeSpecName: "utilities") pod "dc59b433-23a5-4b50-a70d-e0e468a068d9" (UID: "dc59b433-23a5-4b50-a70d-e0e468a068d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.656048 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc59b433-23a5-4b50-a70d-e0e468a068d9-kube-api-access-c4ksc" (OuterVolumeSpecName: "kube-api-access-c4ksc") pod "dc59b433-23a5-4b50-a70d-e0e468a068d9" (UID: "dc59b433-23a5-4b50-a70d-e0e468a068d9"). InnerVolumeSpecName "kube-api-access-c4ksc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.673087 5000 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc59b433-23a5-4b50-a70d-e0e468a068d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc59b433-23a5-4b50-a70d-e0e468a068d9" (UID: "dc59b433-23a5-4b50-a70d-e0e468a068d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.750480 5000 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4ksc\" (UniqueName: \"kubernetes.io/projected/dc59b433-23a5-4b50-a70d-e0e468a068d9-kube-api-access-c4ksc\") on node \"crc\" DevicePath \"\"" Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.750521 5000 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc59b433-23a5-4b50-a70d-e0e468a068d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.750532 5000 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc59b433-23a5-4b50-a70d-e0e468a068d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.978619 5000 generic.go:334] "Generic (PLEG): container finished" podID="dc59b433-23a5-4b50-a70d-e0e468a068d9" containerID="4574fa3309960861e6f5fb134b04373ab0b9ba8ca1ba5f2e548d2ef9d4091632" exitCode=0 Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.978665 5000 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zz877" event={"ID":"dc59b433-23a5-4b50-a70d-e0e468a068d9","Type":"ContainerDied","Data":"4574fa3309960861e6f5fb134b04373ab0b9ba8ca1ba5f2e548d2ef9d4091632"} Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.978695 5000 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-zz877" event={"ID":"dc59b433-23a5-4b50-a70d-e0e468a068d9","Type":"ContainerDied","Data":"307bab0cb2253ccdf1daa8f4db03c5d1ab7caa2b3df6c357f4a9cd226c445394"} Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.978712 5000 scope.go:117] "RemoveContainer" containerID="4574fa3309960861e6f5fb134b04373ab0b9ba8ca1ba5f2e548d2ef9d4091632" Jan 05 22:43:58 crc kubenswrapper[5000]: I0105 22:43:58.978733 5000 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zz877" Jan 05 22:43:59 crc kubenswrapper[5000]: I0105 22:43:59.005264 5000 scope.go:117] "RemoveContainer" containerID="939ec79f9f32c1e66ec0f5a2415628758d910c99178295613161ce76b6345482" Jan 05 22:43:59 crc kubenswrapper[5000]: I0105 22:43:59.010495 5000 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zz877"] Jan 05 22:43:59 crc kubenswrapper[5000]: I0105 22:43:59.020290 5000 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zz877"] Jan 05 22:43:59 crc kubenswrapper[5000]: I0105 22:43:59.029917 5000 scope.go:117] "RemoveContainer" containerID="54df239691e9f63ca0165e26aaba4a761ae283445a32b97850f3fd834873baa6" Jan 05 22:43:59 crc kubenswrapper[5000]: I0105 22:43:59.061647 5000 scope.go:117] "RemoveContainer" containerID="4574fa3309960861e6f5fb134b04373ab0b9ba8ca1ba5f2e548d2ef9d4091632" Jan 05 22:43:59 crc kubenswrapper[5000]: E0105 22:43:59.062216 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4574fa3309960861e6f5fb134b04373ab0b9ba8ca1ba5f2e548d2ef9d4091632\": container with ID starting with 4574fa3309960861e6f5fb134b04373ab0b9ba8ca1ba5f2e548d2ef9d4091632 not found: ID does not exist" containerID="4574fa3309960861e6f5fb134b04373ab0b9ba8ca1ba5f2e548d2ef9d4091632" Jan 05 22:43:59 crc kubenswrapper[5000]: I0105 22:43:59.062317 5000 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4574fa3309960861e6f5fb134b04373ab0b9ba8ca1ba5f2e548d2ef9d4091632"} err="failed to get container status \"4574fa3309960861e6f5fb134b04373ab0b9ba8ca1ba5f2e548d2ef9d4091632\": rpc error: code = NotFound desc = could not find container \"4574fa3309960861e6f5fb134b04373ab0b9ba8ca1ba5f2e548d2ef9d4091632\": container with ID starting with 4574fa3309960861e6f5fb134b04373ab0b9ba8ca1ba5f2e548d2ef9d4091632 not found: ID does not exist" Jan 05 22:43:59 crc kubenswrapper[5000]: I0105 22:43:59.062400 5000 scope.go:117] "RemoveContainer" containerID="939ec79f9f32c1e66ec0f5a2415628758d910c99178295613161ce76b6345482" Jan 05 22:43:59 crc kubenswrapper[5000]: E0105 22:43:59.062907 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"939ec79f9f32c1e66ec0f5a2415628758d910c99178295613161ce76b6345482\": container with ID starting with 939ec79f9f32c1e66ec0f5a2415628758d910c99178295613161ce76b6345482 not found: ID does not exist" containerID="939ec79f9f32c1e66ec0f5a2415628758d910c99178295613161ce76b6345482" Jan 05 22:43:59 crc kubenswrapper[5000]: I0105 22:43:59.062964 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"939ec79f9f32c1e66ec0f5a2415628758d910c99178295613161ce76b6345482"} err="failed to get container status \"939ec79f9f32c1e66ec0f5a2415628758d910c99178295613161ce76b6345482\": rpc error: code = NotFound desc = could not find container \"939ec79f9f32c1e66ec0f5a2415628758d910c99178295613161ce76b6345482\": container with ID starting with 939ec79f9f32c1e66ec0f5a2415628758d910c99178295613161ce76b6345482 not found: ID does not exist" Jan 05 22:43:59 crc kubenswrapper[5000]: I0105 22:43:59.063003 5000 scope.go:117] "RemoveContainer" containerID="54df239691e9f63ca0165e26aaba4a761ae283445a32b97850f3fd834873baa6" Jan 05 22:43:59 crc kubenswrapper[5000]: E0105 
22:43:59.063471 5000 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54df239691e9f63ca0165e26aaba4a761ae283445a32b97850f3fd834873baa6\": container with ID starting with 54df239691e9f63ca0165e26aaba4a761ae283445a32b97850f3fd834873baa6 not found: ID does not exist" containerID="54df239691e9f63ca0165e26aaba4a761ae283445a32b97850f3fd834873baa6" Jan 05 22:43:59 crc kubenswrapper[5000]: I0105 22:43:59.063573 5000 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54df239691e9f63ca0165e26aaba4a761ae283445a32b97850f3fd834873baa6"} err="failed to get container status \"54df239691e9f63ca0165e26aaba4a761ae283445a32b97850f3fd834873baa6\": rpc error: code = NotFound desc = could not find container \"54df239691e9f63ca0165e26aaba4a761ae283445a32b97850f3fd834873baa6\": container with ID starting with 54df239691e9f63ca0165e26aaba4a761ae283445a32b97850f3fd834873baa6 not found: ID does not exist" Jan 05 22:43:59 crc kubenswrapper[5000]: I0105 22:43:59.337610 5000 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc59b433-23a5-4b50-a70d-e0e468a068d9" path="/var/lib/kubelet/pods/dc59b433-23a5-4b50-a70d-e0e468a068d9/volumes" Jan 05 22:44:07 crc kubenswrapper[5000]: I0105 22:44:07.206992 5000 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5759bb69bf-chpv9" podUID="b3694130-425f-4455-9275-0899d204bc66" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502"